Feb 19 20:59:27 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 20:59:28 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 20:59:28 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc 
restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 20:59:28 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 
crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc 
restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:28 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 20:59:29 crc restorecon[4681]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 20:59:29 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 20:59:30 crc kubenswrapper[4886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 20:59:30 crc kubenswrapper[4886]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 20:59:30 crc kubenswrapper[4886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 20:59:30 crc kubenswrapper[4886]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 20:59:30 crc kubenswrapper[4886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 20:59:30 crc kubenswrapper[4886]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.139862 4886 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154352 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154384 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154394 4886 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154402 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154410 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154419 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154428 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154441 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154467 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154476 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154484 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154492 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154500 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154508 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154515 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154523 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154531 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154539 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154547 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154555 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154562 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154571 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 
20:59:30.154578 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154586 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154595 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154603 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154610 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154618 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154626 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154633 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154641 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154649 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154657 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154665 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154673 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154681 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154689 4886 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154696 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154704 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154714 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154723 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154733 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154743 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154752 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154760 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154768 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154776 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154785 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154797 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154805 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154814 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154822 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154831 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154839 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154848 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154857 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154865 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154872 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154881 4886 feature_gate.go:330] unrecognized feature gate: Example Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154888 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154896 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154903 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154913 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154924 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154933 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154941 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154949 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154960 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154968 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154976 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.154986 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156172 4886 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156198 4886 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156214 4886 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156226 4886 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156238 4886 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156247 4886 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156287 4886 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 20:59:30 crc kubenswrapper[4886]: 
I0219 20:59:30.156299 4886 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156309 4886 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156319 4886 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156328 4886 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156370 4886 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156379 4886 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156388 4886 flags.go:64] FLAG: --cgroup-root="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156397 4886 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156406 4886 flags.go:64] FLAG: --client-ca-file="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156415 4886 flags.go:64] FLAG: --cloud-config="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156423 4886 flags.go:64] FLAG: --cloud-provider="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156432 4886 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156446 4886 flags.go:64] FLAG: --cluster-domain="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156454 4886 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156464 4886 flags.go:64] FLAG: --config-dir="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156473 4886 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156483 4886 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 
20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156494 4886 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156504 4886 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156514 4886 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156523 4886 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156533 4886 flags.go:64] FLAG: --contention-profiling="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156542 4886 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156550 4886 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156560 4886 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156570 4886 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156582 4886 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156592 4886 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156601 4886 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156610 4886 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156619 4886 flags.go:64] FLAG: --enable-server="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156628 4886 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156639 4886 flags.go:64] FLAG: --event-burst="100" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156648 4886 flags.go:64] 
FLAG: --event-qps="50" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156657 4886 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156666 4886 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156675 4886 flags.go:64] FLAG: --eviction-hard="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156686 4886 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156695 4886 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156704 4886 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156713 4886 flags.go:64] FLAG: --eviction-soft="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156722 4886 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156730 4886 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156740 4886 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156749 4886 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156758 4886 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156766 4886 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156775 4886 flags.go:64] FLAG: --feature-gates="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156787 4886 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156796 4886 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156805 4886 
flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156814 4886 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156823 4886 flags.go:64] FLAG: --healthz-port="10248" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156832 4886 flags.go:64] FLAG: --help="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156841 4886 flags.go:64] FLAG: --hostname-override="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156850 4886 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156859 4886 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156872 4886 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156882 4886 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156892 4886 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156901 4886 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156911 4886 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156920 4886 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156929 4886 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156938 4886 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156948 4886 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156956 4886 flags.go:64] FLAG: --kube-reserved="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156966 4886 
flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156975 4886 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156984 4886 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.156992 4886 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157001 4886 flags.go:64] FLAG: --lock-file="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157010 4886 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157019 4886 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157028 4886 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157042 4886 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157051 4886 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157060 4886 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157069 4886 flags.go:64] FLAG: --logging-format="text" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157078 4886 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157089 4886 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157098 4886 flags.go:64] FLAG: --manifest-url="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157107 4886 flags.go:64] FLAG: --manifest-url-header="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157121 4886 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 
20:59:30.157130 4886 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157141 4886 flags.go:64] FLAG: --max-pods="110" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157150 4886 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157159 4886 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157168 4886 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157177 4886 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157187 4886 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157195 4886 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157205 4886 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157225 4886 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157235 4886 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157244 4886 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157253 4886 flags.go:64] FLAG: --pod-cidr="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157290 4886 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157305 4886 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157314 4886 flags.go:64] FLAG: 
--pod-max-pids="-1" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157323 4886 flags.go:64] FLAG: --pods-per-core="0" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157332 4886 flags.go:64] FLAG: --port="10250" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157341 4886 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157350 4886 flags.go:64] FLAG: --provider-id="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157359 4886 flags.go:64] FLAG: --qos-reserved="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157368 4886 flags.go:64] FLAG: --read-only-port="10255" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157377 4886 flags.go:64] FLAG: --register-node="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157386 4886 flags.go:64] FLAG: --register-schedulable="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157394 4886 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157409 4886 flags.go:64] FLAG: --registry-burst="10" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157418 4886 flags.go:64] FLAG: --registry-qps="5" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157427 4886 flags.go:64] FLAG: --reserved-cpus="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157437 4886 flags.go:64] FLAG: --reserved-memory="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157448 4886 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157457 4886 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157467 4886 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157476 4886 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 20:59:30 crc 
kubenswrapper[4886]: I0219 20:59:30.157484 4886 flags.go:64] FLAG: --runonce="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157493 4886 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157503 4886 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157513 4886 flags.go:64] FLAG: --seccomp-default="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157522 4886 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157531 4886 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157541 4886 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157550 4886 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157560 4886 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157568 4886 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157577 4886 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157586 4886 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157595 4886 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157605 4886 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157614 4886 flags.go:64] FLAG: --system-cgroups="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157623 4886 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157638 4886 flags.go:64] 
FLAG: --system-reserved-cgroup="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157646 4886 flags.go:64] FLAG: --tls-cert-file="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157655 4886 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157667 4886 flags.go:64] FLAG: --tls-min-version="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157676 4886 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157685 4886 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157694 4886 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157703 4886 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157712 4886 flags.go:64] FLAG: --v="2" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157724 4886 flags.go:64] FLAG: --version="false" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157736 4886 flags.go:64] FLAG: --vmodule="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157747 4886 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.157756 4886 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.157989 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158001 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158010 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158019 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 20:59:30 crc 
kubenswrapper[4886]: W0219 20:59:30.158027 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158036 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158047 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158057 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158067 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158075 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158084 4886 feature_gate.go:330] unrecognized feature gate: Example Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158093 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158101 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158109 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158117 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158125 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158134 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158142 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158150 
4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158157 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158165 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158173 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158180 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158190 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158198 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158206 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158214 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158221 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158229 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158239 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158250 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158264 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158296 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158305 4886 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158313 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158323 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158332 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158341 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158349 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158390 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158400 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158411 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158421 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158431 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158439 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158447 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158455 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158463 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158471 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158478 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158487 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158494 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158502 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158510 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158520 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158529 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158537 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158544 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158552 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158569 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158577 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158585 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158594 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158602 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158609 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158617 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158624 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158632 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158640 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158648 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.158656 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.159667 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.177364 4886 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.177427 4886 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177561 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177575 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177585 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177596 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177605 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177614 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177623 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177634 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177643 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177653 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177662 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177670 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177677 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177686 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177693 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177701 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177709 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177719 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177731 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177740 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177749 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177758 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177766 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177775 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177783 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177791 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177798 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177806 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177815 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177823 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177830 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177838 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177848 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177858 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177869 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177878 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177888 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177898 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177909 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177919 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177929 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177939 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177949 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177959 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177970 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177980 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.177990 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178000 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178011 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178025 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178034 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178042 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178051 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178060 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178069 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178077 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178087 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178095 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178103 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178111 4886 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178120 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178127 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178135 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178143 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178151 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178160 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178167 4886 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178175 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178183 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178193 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178206 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.178221 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178489 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178506 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178516 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178525 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178534 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178568 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178577 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178586 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178598 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178612 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178622 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178631 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178639 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178648 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178660 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178670 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178681 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178690 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178700 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178708 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178717 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178726 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178734 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178743 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178752 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178760 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178769 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178778 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178786 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178796 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178804 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178812 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178820 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178828 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178837 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178848 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178858 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178866 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178874 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178882 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178890 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178898 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178905 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178913 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178922 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178929 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178937 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178945 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178952 4886 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178960 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178969 4886 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178977 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178985 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.178992 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179000 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179008 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179016 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179023 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179031 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179041 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179051 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179059 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179068 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179077 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179085 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179094 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179102 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179110 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179119 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179127 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.179137 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.179150 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.186966 4886 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.221944 4886 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.222128 4886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.228098 4886 server.go:997] "Starting client certificate rotation"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.228142 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.228446 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-04 07:44:59.336537548 +0000 UTC
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.228573 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.295041 4886 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.296325 4886 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.303342 4886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.337169 4886 log.go:25] "Validated CRI v1 runtime API"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.411942 4886 log.go:25] "Validated CRI v1 image API"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.414949 4886 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.420999 4886 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-20-55-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.421082 4886 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.453452 4886 manager.go:217] Machine: {Timestamp:2026-02-19 20:59:30.448211395 +0000 UTC m=+1.076054515 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c49a73fe-5379-4c2f-8aa6-d3284e3163e6 BootID:82b5df03-b579-404f-8f39-285e5c50c205 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e0:6a:6f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e0:6a:6f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:69:5e:99 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:44:78:52 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:af:e0:5a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cd:f4:66 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:55:0a:dc:d5:a4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:11:24:6b:cc:d8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.453881 4886 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.454098 4886 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.456395 4886 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.456755 4886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.456867 4886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.457220 4886 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.457243 4886 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.458417 4886 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.458460 4886 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.462780 4886 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.462904 4886 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.472059 4886 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.472093 4886 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.472155 4886 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.472173 4886 kubelet.go:324] "Adding apiserver pod source"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.472189 4886 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.484803 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.484922 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError"
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.484933 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.485106 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.488076 4886 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.489547 4886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.493854 4886 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495881 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495917 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495929 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495939 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495966 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495976 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495985 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.495998 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.496012 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.496030 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.496046 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.496056 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.499483 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.500158 4886 server.go:1280] "Started kubelet"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.500239 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.501476 4886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.501479 4886 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.502227 4886 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 20:59:30 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.503670 4886 server.go:460] "Adding debug handlers to kubelet server"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.504527 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.504706 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:01:41.75313675 +0000 UTC
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.505264 4886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.505544 4886 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.505552 4886 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.505731 4886 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.506777 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.506858 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.506969 4886 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.508167 4886 factory.go:55] Registering systemd factory
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.510448 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.510782 4886 factory.go:221] Registration of the systemd container factory successfully
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.511245 4886 factory.go:153] Registering CRI-O factory
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.511308 4886 factory.go:221] Registration of the crio container factory successfully
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.511417 4886 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.511458 4886 factory.go:103] Registering Raw factory
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.511484 4886 manager.go:1196] Started watching for new ooms in manager
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.515313 4886 manager.go:319] Starting recovery of all containers
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516079 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516184 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516213 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516234 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516255 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516308 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516329 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516349 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516371 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516391 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516411 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516431 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516451 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516476 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516496 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516549 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516584 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516604 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516623 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516645 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516666 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516687 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516707 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516735 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516765 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516794 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516824 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516854 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516879 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516907 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516934 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516960 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.516981 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517001 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517020 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517041 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517062 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517083 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517101 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517120 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517139 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517159 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517182 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517203 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517225 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517244 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517526 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517568 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517588 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517608 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517628 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517648 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517677 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517702 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517726 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517748 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517770 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517792 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517816 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517837 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517857 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517877 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517900 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517925 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517949 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517969 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.517989 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518008 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518028 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518049 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518070 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518089 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518109 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518130 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518149 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518169 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518189 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518209 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518228 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518252 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518305 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518331 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518361 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518380 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518441 4886
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518464 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518485 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518505 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518527 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518547 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518569 4886 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518589 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518609 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518661 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518683 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518707 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518727 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518747 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518770 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518790 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518810 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518831 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518852 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518872 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518902 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518924 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518945 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518967 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.518988 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.519012 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.519034 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.519056 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536466 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536671 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536701 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" 
seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536737 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536759 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536785 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536825 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536854 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536884 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536926 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.536958 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537000 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537036 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537068 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537107 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537139 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537180 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537213 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537244 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537325 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537359 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537402 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537437 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537502 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.516582 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895c17cdee998e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 20:59:30.500114658 +0000 UTC m=+1.127957718,LastTimestamp:2026-02-19 20:59:30.500114658 +0000 UTC m=+1.127957718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.537548 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.538098 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.540955 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542581 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542639 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542670 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542690 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542732 4886 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542762 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542781 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542809 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542827 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542845 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542868 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542889 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542928 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.542997 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543020 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543047 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543066 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" 
seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543084 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543120 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543138 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543160 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543177 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543198 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 
20:59:30.543224 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543241 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543284 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543319 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543341 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543362 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543380 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543403 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543423 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543442 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543468 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543485 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543529 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543547 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543563 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543584 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543601 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543623 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543639 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543658 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543677 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543695 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543780 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543805 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543833 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543852 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543921 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.543969 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544025 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544081 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544109 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544146 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544171 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544204 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544229 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544254 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544325 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544350 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.544383 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549208 4886 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549256 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549298 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549315 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549330 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549344 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549366 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549377 4886 reconstruct.go:97] "Volume reconstruction finished"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.549385 4886 reconciler.go:26] "Reconciler: start to sync state"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.559189 4886 manager.go:324] Recovery completed
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.569973 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.572075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.572127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.572140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.572860 4886 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.572882 4886 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.572904 4886 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.597202 4886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.599772 4886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.599831 4886 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.599877 4886 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.600069 4886 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 19 20:59:30 crc kubenswrapper[4886]: W0219 20:59:30.600753 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.600903 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.607868 4886 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.614872 4886 policy_none.go:49] "None policy: Start"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.616155 4886 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.616193 4886 state_mem.go:35] "Initializing new in-memory state store"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.687826 4886 manager.go:334] "Starting Device Plugin manager"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.687885 4886 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.687898 4886 server.go:79] "Starting device plugin registration server"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.688610 4886 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.688642 4886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.688785 4886 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.688880 4886 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.688889 4886 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.696417 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.701179 4886 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.701281 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.702563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.702632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.702660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.702965 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.703247 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.703325 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.704630 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.704653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.704690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.704703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.704727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.704709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.705003 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.705213 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.705322 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706701 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706741 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.706964 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708088 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708202 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708427 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708617 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.708679 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709799 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.709823 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.710540 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.710562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.710572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.711169 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751671 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751689 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751720 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751744 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751771 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751828 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751849 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.751891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.788724 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.789669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.789705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.789717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.789741 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.790254 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.853409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.853669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.853749 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854090 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854468 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854716 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854797 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854827 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854854 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854924 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854951 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\"
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.854979 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855016 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855051 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855317 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855588 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855627 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855656 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.855251 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.956184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.956640 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.956597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.956769 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.957081 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.957201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.957316 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.957131 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.990863 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.992575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.992613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.992625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:30 crc kubenswrapper[4886]: I0219 20:59:30.992656 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 20:59:30 crc kubenswrapper[4886]: E0219 20:59:30.993110 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.30:6443: connect: connection refused" node="crc" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.041138 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.054296 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.085142 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.112508 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.117786 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.131311 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.168863 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f67007b3cb8139d1547d74d44338c6c5506dffd07a4ca8b23dea8b45f935728c WatchSource:0}: Error finding container f67007b3cb8139d1547d74d44338c6c5506dffd07a4ca8b23dea8b45f935728c: Status 404 returned error can't find the container with id f67007b3cb8139d1547d74d44338c6c5506dffd07a4ca8b23dea8b45f935728c Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.177288 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-953e764ded11e67feeef87524bcc95083e31d674760f17b06aa78f71ade58b08 WatchSource:0}: Error finding container 953e764ded11e67feeef87524bcc95083e31d674760f17b06aa78f71ade58b08: Status 404 returned error can't find the container with id 953e764ded11e67feeef87524bcc95083e31d674760f17b06aa78f71ade58b08 Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.185008 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f1fa6087d9393eb1f29955ad3b392c3588ea476a50a87fd91920b95909f890f3 WatchSource:0}: Error finding container f1fa6087d9393eb1f29955ad3b392c3588ea476a50a87fd91920b95909f890f3: Status 404 returned error can't find the container with id f1fa6087d9393eb1f29955ad3b392c3588ea476a50a87fd91920b95909f890f3 Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.192189 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9eb63c628c29ae4a1a5aa7a8eba00fdfd8ce511913a05fcd7652a1e6ce985fff 
WatchSource:0}: Error finding container 9eb63c628c29ae4a1a5aa7a8eba00fdfd8ce511913a05fcd7652a1e6ce985fff: Status 404 returned error can't find the container with id 9eb63c628c29ae4a1a5aa7a8eba00fdfd8ce511913a05fcd7652a1e6ce985fff Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.193862 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0eaeeda3a8322b2d8200b4ab81d319519a8b7e987afbbd070f05bae121e138f8 WatchSource:0}: Error finding container 0eaeeda3a8322b2d8200b4ab81d319519a8b7e987afbbd070f05bae121e138f8: Status 404 returned error can't find the container with id 0eaeeda3a8322b2d8200b4ab81d319519a8b7e987afbbd070f05bae121e138f8 Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.346737 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.346865 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.394237 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.395997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.396045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.396063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.396098 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.396822 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.451965 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.452081 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.482111 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.482187 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.502402 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.505462 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:38:14.130759768 +0000 UTC Feb 19 20:59:31 crc kubenswrapper[4886]: W0219 20:59:31.539426 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.539505 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.607010 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f67007b3cb8139d1547d74d44338c6c5506dffd07a4ca8b23dea8b45f935728c"} Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.608230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9eb63c628c29ae4a1a5aa7a8eba00fdfd8ce511913a05fcd7652a1e6ce985fff"} Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.610331 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f1fa6087d9393eb1f29955ad3b392c3588ea476a50a87fd91920b95909f890f3"} Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.611768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0eaeeda3a8322b2d8200b4ab81d319519a8b7e987afbbd070f05bae121e138f8"} Feb 19 20:59:31 crc kubenswrapper[4886]: I0219 20:59:31.614194 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"953e764ded11e67feeef87524bcc95083e31d674760f17b06aa78f71ade58b08"} Feb 19 20:59:31 crc kubenswrapper[4886]: E0219 20:59:31.913966 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.197605 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.199433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.199500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.199521 
4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.199565 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 20:59:32 crc kubenswrapper[4886]: E0219 20:59:32.200234 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.356664 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 20:59:32 crc kubenswrapper[4886]: E0219 20:59:32.358264 4886 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.501925 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.505820 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:33:47.255861907 +0000 UTC Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.618147 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"08a1ee93fdf51512bcee5f25703735a2fee9ad0f2c1ffc77bde9db6135bf830d"} Feb 19 20:59:32 crc 
kubenswrapper[4886]: I0219 20:59:32.619988 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412"} Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.621593 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0"} Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.623554 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3"} Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.624734 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f"} Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.625022 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.626354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.626387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:32 crc kubenswrapper[4886]: I0219 20:59:32.626395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc 
kubenswrapper[4886]: W0219 20:59:33.429981 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:33 crc kubenswrapper[4886]: E0219 20:59:33.430097 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.502624 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.506707 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:35:24.736925391 +0000 UTC Feb 19 20:59:33 crc kubenswrapper[4886]: E0219 20:59:33.515775 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.632403 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 
20:59:33.632490 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.632507 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.632522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.634004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.634076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.634104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.636479 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f" exitCode=0 Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.636610 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.636653 4886 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.638021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.638080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.638095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.640426 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="08a1ee93fdf51512bcee5f25703735a2fee9ad0f2c1ffc77bde9db6135bf830d" exitCode=0 Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.640478 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"08a1ee93fdf51512bcee5f25703735a2fee9ad0f2c1ffc77bde9db6135bf830d"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.640596 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.641074 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.642074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.642119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.642131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 
20:59:33.642521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.642575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.642604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.643859 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412" exitCode=0 Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.643959 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.644007 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.645129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.645170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.645192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.647949 4886 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0" exitCode=0 Feb 19 20:59:33 
crc kubenswrapper[4886]: I0219 20:59:33.648010 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0"} Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.648030 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.649318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.649379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.649399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.800422 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.801699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.801754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.801772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.801807 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 20:59:33 crc kubenswrapper[4886]: E0219 20:59:33.802398 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Feb 19 20:59:33 crc kubenswrapper[4886]: W0219 20:59:33.893219 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:33 crc kubenswrapper[4886]: E0219 20:59:33.893317 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:33 crc kubenswrapper[4886]: I0219 20:59:33.965476 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.502245 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.507424 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:32:34.082835388 +0000 UTC Feb 19 20:59:34 crc kubenswrapper[4886]: W0219 20:59:34.536358 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:34 crc kubenswrapper[4886]: E0219 20:59:34.536455 4886 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:34 crc kubenswrapper[4886]: W0219 20:59:34.641602 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:34 crc kubenswrapper[4886]: E0219 20:59:34.641713 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.652756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.652835 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.653925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.653960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.653974 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.655730 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.655752 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.655762 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.655839 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.656622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.656643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.656651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.659050 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.659070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.659080 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.660898 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="44382524ebd341b0c95848c826c53e0f1015d861cea41eb7e5a77a8c445f677a" exitCode=0 Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.660983 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.661304 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.661548 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"44382524ebd341b0c95848c826c53e0f1015d861cea41eb7e5a77a8c445f677a"} Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.662017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.662034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.662043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.662443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.662471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:34 crc kubenswrapper[4886]: I0219 20:59:34.662478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.501461 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.544845 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:36:17.27902064 +0000 UTC Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.666876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a39f577177775068bf27c0345183eb150c3b194285e0752b6588e4704147bf7"} Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.666940 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a"} Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.667101 4886 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.668257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.668326 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.668343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.671582 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1efa899717c3963c476f4d658c00b7853e8f06c26395cfbb6e94ca653e7afe92" exitCode=0 Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.671741 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.672448 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.672996 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1efa899717c3963c476f4d658c00b7853e8f06c26395cfbb6e94ca653e7afe92"} Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.673094 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.673127 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.673510 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674299 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674633 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.674893 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.675851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.675913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:35 crc kubenswrapper[4886]: I0219 20:59:35.675935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:36 crc 
kubenswrapper[4886]: I0219 20:59:36.069638 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.417113 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.545187 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:26:27.445171343 +0000 UTC Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.676298 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.678647 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8a39f577177775068bf27c0345183eb150c3b194285e0752b6588e4704147bf7" exitCode=255 Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.678744 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8a39f577177775068bf27c0345183eb150c3b194285e0752b6588e4704147bf7"} Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.678803 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.679726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.679765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.679781 4886 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.680378 4886 scope.go:117] "RemoveContainer" containerID="8a39f577177775068bf27c0345183eb150c3b194285e0752b6588e4704147bf7" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.681902 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51b08f67ec18fba981533d2675bcb08289ddcc21966286ff4fd7d688bb82cccc"} Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.681931 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.681938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10eab12f9350be6bb42736c76ac37415554e6f7cdee58b2499a6f3b5422885b7"} Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.682897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.682935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:36 crc kubenswrapper[4886]: I0219 20:59:36.682955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.003046 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.004229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.004332 4886 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.004351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.004392 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.350530 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.350773 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.352349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.352508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.352605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.359384 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.545728 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:07:48.550962282 +0000 UTC Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.690104 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d726ac44591f4c7bfdaed9f962e7ec242327a21d21d87a0e8d7b21281395438"} Feb 19 20:59:37 
crc kubenswrapper[4886]: I0219 20:59:37.690157 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a5425b5cf72feec88a6d60b2118b58873d70ea39d6f09d6677e455bd0419f79"} Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.690172 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7a5c19b9ae0648d49c14d26082450de3e32b86813df4ec6d0234215c73d2a825"} Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.690230 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.691695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.691746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.691763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.693607 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.695827 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666"} Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.695871 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 20:59:37 crc kubenswrapper[4886]: 
I0219 20:59:37.695917 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.695888 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.696999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.697037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.697050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.697249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.697317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:37 crc kubenswrapper[4886]: I0219 20:59:37.697336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.320927 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.546405 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:13:14.649212049 +0000 UTC Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.701343 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.701413 4886 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.701482 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.702745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.702805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.702829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.703443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.703513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:38 crc kubenswrapper[4886]: I0219 20:59:38.703539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.438152 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.546745 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:28:37.755339721 +0000 UTC Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.703680 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.704053 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 20:59:39 crc 
kubenswrapper[4886]: I0219 20:59:39.704561 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.705714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.705772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.705785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.705975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.706012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.706031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:39 crc kubenswrapper[4886]: I0219 20:59:39.922417 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.113897 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.137983 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.260531 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.260827 4886 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.262406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.262463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.262486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.548101 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:19:05.963745827 +0000 UTC Feb 19 20:59:40 crc kubenswrapper[4886]: E0219 20:59:40.696528 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.706355 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.706358 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.707979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.708041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.708068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.707995 4886 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.708298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:40 crc kubenswrapper[4886]: I0219 20:59:40.708329 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:41 crc kubenswrapper[4886]: I0219 20:59:41.548813 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:53:52.260218406 +0000 UTC Feb 19 20:59:41 crc kubenswrapper[4886]: I0219 20:59:41.709319 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:41 crc kubenswrapper[4886]: I0219 20:59:41.710512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:41 crc kubenswrapper[4886]: I0219 20:59:41.710615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:41 crc kubenswrapper[4886]: I0219 20:59:41.710640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.549186 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:40:13.866172129 +0000 UTC Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.701533 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.701770 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:42 crc 
kubenswrapper[4886]: I0219 20:59:42.703645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.703716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.703735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.709145 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.713392 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.715000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.715065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:42 crc kubenswrapper[4886]: I0219 20:59:42.715089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:43 crc kubenswrapper[4886]: I0219 20:59:43.549395 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:54:32.918990588 +0000 UTC Feb 19 20:59:44 crc kubenswrapper[4886]: I0219 20:59:44.549925 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:45:34.100910969 +0000 UTC Feb 19 20:59:45 crc kubenswrapper[4886]: I0219 20:59:45.551006 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:24:41.946472746 +0000 UTC Feb 19 20:59:45 crc kubenswrapper[4886]: I0219 20:59:45.701861 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:59:45 crc kubenswrapper[4886]: I0219 20:59:45.701953 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:59:46 crc kubenswrapper[4886]: E0219 20:59:46.419514 4886 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 20:59:46 crc kubenswrapper[4886]: I0219 20:59:46.502711 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 20:59:46 crc kubenswrapper[4886]: I0219 20:59:46.552067 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:30:29.456860855 +0000 UTC Feb 19 20:59:46 crc kubenswrapper[4886]: E0219 
20:59:46.716382 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 19 20:59:47 crc kubenswrapper[4886]: E0219 20:59:47.005713 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Feb 19 20:59:47 crc kubenswrapper[4886]: I0219 20:59:47.071290 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 20:59:47 crc kubenswrapper[4886]: I0219 20:59:47.071350 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 20:59:47 crc kubenswrapper[4886]: I0219 20:59:47.077775 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 20:59:47 crc kubenswrapper[4886]: I0219 20:59:47.077995 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 20:59:47 crc kubenswrapper[4886]: I0219 20:59:47.553252 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:12:03.202139644 +0000 UTC Feb 19 20:59:48 crc kubenswrapper[4886]: I0219 20:59:48.554077 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:39:24.09839924 +0000 UTC Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.554369 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:53:47.1790352 +0000 UTC Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.962551 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.962809 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.964980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.965039 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.965060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:49 crc kubenswrapper[4886]: I0219 20:59:49.983778 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.122467 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.122792 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.123215 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.123347 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.125157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.125236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.125256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.129807 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.139250 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.139356 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.555200 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:50:49.49502781 +0000 UTC Feb 19 20:59:50 crc kubenswrapper[4886]: E0219 20:59:50.696668 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.737146 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.737223 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.737900 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.737999 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 20:59:50 
crc kubenswrapper[4886]: I0219 20:59:50.738481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.738512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.738522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.738592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.738656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:50 crc kubenswrapper[4886]: I0219 20:59:50.738679 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:51 crc kubenswrapper[4886]: I0219 20:59:51.503003 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 20:59:51 crc kubenswrapper[4886]: I0219 20:59:51.503101 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 20:59:51 crc kubenswrapper[4886]: I0219 20:59:51.555877 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 
11:16:44.030704794 +0000 UTC Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.071181 4886 trace.go:236] Trace[1590882228]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 20:59:37.756) (total time: 14314ms): Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[1590882228]: ---"Objects listed" error: 14314ms (20:59:52.071) Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[1590882228]: [14.314566267s] [14.314566267s] END Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.071240 4886 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.072188 4886 trace.go:236] Trace[305014287]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 20:59:38.605) (total time: 13467ms): Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[305014287]: ---"Objects listed" error: 13467ms (20:59:52.072) Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[305014287]: [13.467073564s] [13.467073564s] END Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.072242 4886 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.073678 4886 trace.go:236] Trace[662893889]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 20:59:37.648) (total time: 14424ms): Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[662893889]: ---"Objects listed" error: 14424ms (20:59:52.073) Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[662893889]: [14.424994265s] [14.424994265s] END Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.073714 4886 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.074050 4886 trace.go:236] Trace[332188043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 
20:59:38.741) (total time: 13332ms): Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[332188043]: ---"Objects listed" error: 13332ms (20:59:52.073) Feb 19 20:59:52 crc kubenswrapper[4886]: Trace[332188043]: [13.332655378s] [13.332655378s] END Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.074086 4886 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.074058 4886 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.488437 4886 apiserver.go:52] "Watching apiserver" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.556294 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:10:55.023863615 +0000 UTC Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.647870 4886 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.648311 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.648819 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.648904 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.649011 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.649378 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.649370 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.649364 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.649666 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.649737 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.649807 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.654299 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.654383 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.654390 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.654481 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.654538 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.655004 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.655421 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.656027 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.661539 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.700466 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.706459 4886 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.707937 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.712550 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.716429 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.718959 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.726864 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.738639 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.744603 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.745276 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.747933 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666" exitCode=255 Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.748002 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666"} Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.748168 4886 scope.go:117] "RemoveContainer" containerID="8a39f577177775068bf27c0345183eb150c3b194285e0752b6588e4704147bf7" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.750432 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.755392 4886 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.763074 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.775074 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778013 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778045 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778065 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778088 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778348 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778377 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778403 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778422 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778441 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778466 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778484 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778505 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778530 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778577 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778595 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778610 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778643 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778716 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778719 4886 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778753 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778771 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778787 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778810 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.778845 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 20:59:53.278822975 +0000 UTC m=+23.906666135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778876 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778904 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778929 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778954 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778978 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.778999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779020 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779041 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779062 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 20:59:52 crc 
kubenswrapper[4886]: I0219 20:59:52.779084 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779104 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779126 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779153 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779143 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779180 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779185 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779458 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779623 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779205 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779720 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779742 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779737 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779933 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779939 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779973 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780006 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780038 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780082 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780154 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780178 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780180 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" 
(OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780201 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780248 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780489 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780253 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.780941 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781105 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781159 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781192 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781217 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781242 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781289 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781296 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781312 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781364 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781397 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781460 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781492 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.779437 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781518 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781545 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781570 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781595 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781619 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781616 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781644 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781671 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781696 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781721 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781750 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781791 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781818 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781843 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781871 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781896 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 20:59:52 crc 
kubenswrapper[4886]: I0219 20:59:52.781921 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781932 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.781946 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782162 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782162 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782249 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782401 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782450 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782505 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782556 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782602 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782614 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782653 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782621 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782708 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782723 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782759 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782762 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782850 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782853 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782892 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782899 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.782942 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783049 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783110 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783129 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783166 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783309 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783314 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783423 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783446 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783490 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783528 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783563 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783639 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783653 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783672 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783720 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783758 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783760 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783791 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783825 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783859 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783892 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783944 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.783977 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784436 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784491 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784526 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784657 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784696 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784720 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784751 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784767 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784802 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784833 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784840 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.784893 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785069 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785171 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785207 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785228 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785361 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785451 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785485 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785531 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785566 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785614 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc 
kubenswrapper[4886]: I0219 20:59:52.785774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785819 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785880 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785933 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785965 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.785977 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786026 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786080 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786131 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786182 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786251 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786331 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786380 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786479 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786528 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786604 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786657 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786702 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786752 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786809 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786855 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786904 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786957 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787006 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787058 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787107 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787164 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787215 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787291 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787342 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787382 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787415 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787453 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787486 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787520 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787554 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787585 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 20:59:52 crc 
kubenswrapper[4886]: I0219 20:59:52.787616 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787649 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787723 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787795 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787827 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787860 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787893 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787927 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787961 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788009 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788087 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788132 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788171 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788213 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788254 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788338 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788375 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788409 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788444 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788480 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788512 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788551 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788585 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788622 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788655 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788690 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788725 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 20:59:52 
crc kubenswrapper[4886]: I0219 20:59:52.788758 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788793 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788829 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788862 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788897 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788935 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788970 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789006 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789078 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789161 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789242 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789305 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789341 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789381 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789608 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789564 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790060 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790102 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790151 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790357 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790385 4886 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790420 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790452 4886 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790481 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790505 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790534 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790562 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790592 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790621 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790642 4886 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790662 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790680 4886 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790698 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790911 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790937 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790958 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790979 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791003 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791028 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791056 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791736 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793134 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793172 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793188 4886 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793206 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793217 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793230 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793242 4886 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793252 4886 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793278 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793288 4886 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793300 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793309 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793320 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793330 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793340 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 
20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793351 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793360 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793370 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793379 4886 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793389 4886 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793400 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793410 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793419 4886 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793429 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786138 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786413 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786702 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.786801 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787177 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787396 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787595 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.787797 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788162 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788557 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.788851 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789215 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789432 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.789742 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790301 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790630 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790715 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790873 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.790977 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791427 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791511 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791212 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.791663 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792128 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792415 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792689 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792718 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792782 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.792955 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793096 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793153 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793487 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793545 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793714 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793720 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.793861 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.797593 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:53.297568985 +0000 UTC m=+23.925412045 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.793938 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.794007 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.794845 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.794938 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795012 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795033 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795041 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795093 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795228 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795269 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795507 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795615 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795639 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795720 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795739 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795739 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795887 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.795982 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796055 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796122 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796154 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796387 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796457 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796553 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796701 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.796859 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.797061 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.797694 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.797915 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:53.297897973 +0000 UTC m=+23.925741023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.797933 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798023 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798201 4886 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798362 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798376 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798610 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798729 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.798819 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.799530 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.799706 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.799828 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.800025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.800025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.800527 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.800973 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.801370 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.801896 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.802646 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.803105 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.803695 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.803850 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.803848 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.803863 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.804447 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.804598 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.804601 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.804802 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.804903 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.804931 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.806093 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.806194 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.807567 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.808125 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.810398 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.810541 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.818947 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.819384 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.819650 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.819825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.819980 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.820030 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.820193 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.822743 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.822994 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823180 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823199 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823211 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823279 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:53.323248725 +0000 UTC m=+23.951091775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823881 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823912 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.823951 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.824033 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:53.324014973 +0000 UTC m=+23.951858023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.824336 4886 scope.go:117] "RemoveContainer" containerID="fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666" Feb 19 20:59:52 crc kubenswrapper[4886]: E0219 20:59:52.824579 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.824909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.826670 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.826758 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.826823 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.827320 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.827565 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.828036 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.828088 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.829390 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.830150 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.832691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.832894 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.832778 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.833221 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.833442 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.833718 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.833790 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.834452 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.834542 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.834823 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.835799 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.835925 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.835960 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.835989 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836046 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836151 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836240 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836492 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836715 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836924 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.836924 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837000 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837009 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837392 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837570 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837580 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837743 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.837992 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.838102 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.838137 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.840155 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.846810 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.848875 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.849355 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.859934 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.865670 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.869041 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.869364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894641 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894652 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894706 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894709 4886 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894750 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894765 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894778 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894791 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894805 4886 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894817 4886 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894831 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894843 4886 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894854 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894866 4886 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894878 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894889 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894900 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894912 4886 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 
crc kubenswrapper[4886]: I0219 20:59:52.894923 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894935 4886 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894947 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894957 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894968 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894980 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.894993 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895004 4886 reconciler_common.go:293] "Volume detached for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895017 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895028 4886 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895039 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895050 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895061 4886 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895072 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895084 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895095 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895105 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895118 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895129 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895140 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895151 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895162 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node 
\"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895190 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895201 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895212 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895223 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895234 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895248 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895281 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895297 4886 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895313 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895328 4886 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895343 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895357 4886 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895368 4886 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895380 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895392 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895404 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895416 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895427 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895438 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895451 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895462 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895474 4886 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node 
\"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895486 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895497 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895509 4886 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895520 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895533 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895544 4886 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895555 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895567 4886 
reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895578 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895589 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895600 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895612 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895624 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895638 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895651 4886 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895663 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895674 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895686 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895698 4886 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895709 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895721 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895733 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895745 4886 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895758 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895769 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895781 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895792 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895803 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895843 4886 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 
crc kubenswrapper[4886]: I0219 20:59:52.895854 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895866 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895877 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895888 4886 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895899 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895910 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895922 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895934 4886 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895945 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895956 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895968 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895979 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.895990 4886 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896005 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896017 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896028 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896039 4886 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896051 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896063 4886 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896074 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896085 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896096 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 
crc kubenswrapper[4886]: I0219 20:59:52.896107 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896118 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896129 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896140 4886 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896152 4886 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896187 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896198 4886 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896209 4886 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896220 4886 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896232 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896243 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896254 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896283 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896294 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896307 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node 
\"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896318 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896328 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896339 4886 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896351 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896363 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896374 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896385 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896396 4886 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896408 4886 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896421 4886 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896433 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896444 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896455 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896468 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896479 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896490 4886 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896501 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896512 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896536 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896547 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896558 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.896572 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") 
on node \"crc\" DevicePath \"\"" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.965370 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.975468 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 20:59:52 crc kubenswrapper[4886]: I0219 20:59:52.982668 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 20:59:52 crc kubenswrapper[4886]: W0219 20:59:52.986938 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fa36f63c34e68c5a174d7f5d8a8656139845484e82f85b220479392a3baab8f9 WatchSource:0}: Error finding container fa36f63c34e68c5a174d7f5d8a8656139845484e82f85b220479392a3baab8f9: Status 404 returned error can't find the container with id fa36f63c34e68c5a174d7f5d8a8656139845484e82f85b220479392a3baab8f9 Feb 19 20:59:52 crc kubenswrapper[4886]: W0219 20:59:52.998161 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e3fd78a9e9e970757eefabc9230a57da3ec8f2a42e2160775b297cb9bc4de1d2 WatchSource:0}: Error finding container e3fd78a9e9e970757eefabc9230a57da3ec8f2a42e2160775b297cb9bc4de1d2: Status 404 returned error can't find the container with id e3fd78a9e9e970757eefabc9230a57da3ec8f2a42e2160775b297cb9bc4de1d2 Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.298863 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.298940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.298959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.299055 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.299136 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 20:59:54.299108133 +0000 UTC m=+24.926951183 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.299138 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.299161 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:54.299154114 +0000 UTC m=+24.926997164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.299216 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:54.299198855 +0000 UTC m=+24.927041915 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.399387 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.399430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399560 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399580 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399597 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399644 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:54.399628928 +0000 UTC m=+25.027471988 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399560 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399669 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399681 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.399723 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 20:59:54.39971168 +0000 UTC m=+25.027554730 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.406280 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.407576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.407606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.407617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.407688 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.414957 4886 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.415067 4886 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.415911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.415994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc 
kubenswrapper[4886]: I0219 20:59:53.416057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.416129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.416186 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.427391 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.430253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.430303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.430313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.430327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 
20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.430339 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.438221 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.442211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.442252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.442275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.442292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.442303 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.453752 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.464953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.464994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.465004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.465017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.465029 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.475968 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.479064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.479105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.479117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.479133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.479144 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.503120 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.503292 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.504662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.504696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.504720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.504739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.504752 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.557157 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:27:52.54517235 +0000 UTC Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.606846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.607084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.607146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.607215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.607300 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.678416 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-w7m4j"] Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.678906 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lngzl"] Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.679031 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.679667 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.682865 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.684033 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.684215 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.684278 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.684615 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.684630 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.684664 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 20:59:53 crc 
kubenswrapper[4886]: I0219 20:59:53.691095 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.702550 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a39f577177775068bf27c0345183eb150c3b194285e0752b6588e4704147bf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:36Z\\\",\\\"message\\\":\\\"W0219 20:59:35.241953 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 20:59:35.242308 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771534775 cert, and key in /tmp/serving-cert-1150042057/serving-signer.crt, /tmp/serving-cert-1150042057/serving-signer.key\\\\nI0219 20:59:35.800434 1 observer_polling.go:159] Starting file observer\\\\nW0219 20:59:35.806726 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 20:59:35.806974 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:35.811372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1150042057/tls.crt::/tmp/serving-cert-1150042057/tls.key\\\\\\\"\\\\nF0219 20:59:35.984935 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.709624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.709662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.709670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.709684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.709693 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.713095 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.721571 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.730476 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.739553 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.753003 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.753036 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d"} Feb 19 20:59:53 
crc kubenswrapper[4886]: I0219 20:59:53.753046 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fa36f63c34e68c5a174d7f5d8a8656139845484e82f85b220479392a3baab8f9"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.754349 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.755059 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.757910 4886 scope.go:117] "RemoveContainer" containerID="fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.757997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"58f86c82ed7b918c32657b03c23caa5e04bb5473be8d2e50554ee0a37148bffa"} Feb 19 20:59:53 crc kubenswrapper[4886]: E0219 20:59:53.758082 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.759352 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.759377 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e3fd78a9e9e970757eefabc9230a57da3ec8f2a42e2160775b297cb9bc4de1d2"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.771696 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.790696 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.800522 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.807780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ee2e118-e60c-497a-bebd-d10319626e73-host\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.807810 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5fbb997b-bcbe-47fc-99ff-eb1e6b405954-hosts-file\") pod \"node-resolver-w7m4j\" (UID: \"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\") " pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.807827 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm9cj\" (UniqueName: \"kubernetes.io/projected/5fbb997b-bcbe-47fc-99ff-eb1e6b405954-kube-api-access-dm9cj\") pod \"node-resolver-w7m4j\" (UID: \"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\") " pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.807843 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8ee2e118-e60c-497a-bebd-d10319626e73-serviceca\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.807860 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs752\" (UniqueName: \"kubernetes.io/projected/8ee2e118-e60c-497a-bebd-d10319626e73-kube-api-access-bs752\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.812284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.812326 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.812335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 
20:59:53.812349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.812359 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.814977 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.832933 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.845371 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.872033 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.883981 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.892113 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.902072 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908423 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ee2e118-e60c-497a-bebd-d10319626e73-host\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908485 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5fbb997b-bcbe-47fc-99ff-eb1e6b405954-hosts-file\") pod \"node-resolver-w7m4j\" (UID: \"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\") " pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm9cj\" (UniqueName: \"kubernetes.io/projected/5fbb997b-bcbe-47fc-99ff-eb1e6b405954-kube-api-access-dm9cj\") pod \"node-resolver-w7m4j\" (UID: \"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\") " pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908560 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8ee2e118-e60c-497a-bebd-d10319626e73-serviceca\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908581 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs752\" (UniqueName: \"kubernetes.io/projected/8ee2e118-e60c-497a-bebd-d10319626e73-kube-api-access-bs752\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908588 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ee2e118-e60c-497a-bebd-d10319626e73-host\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.908838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5fbb997b-bcbe-47fc-99ff-eb1e6b405954-hosts-file\") pod \"node-resolver-w7m4j\" (UID: 
\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\") " pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.909963 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.910640 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8ee2e118-e60c-497a-bebd-d10319626e73-serviceca\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.913728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.913761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.913771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.913785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.913794 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:53Z","lastTransitionTime":"2026-02-19T20:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.922526 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.928579 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm9cj\" (UniqueName: \"kubernetes.io/projected/5fbb997b-bcbe-47fc-99ff-eb1e6b405954-kube-api-access-dm9cj\") pod \"node-resolver-w7m4j\" (UID: \"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\") " pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.928624 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs752\" (UniqueName: \"kubernetes.io/projected/8ee2e118-e60c-497a-bebd-d10319626e73-kube-api-access-bs752\") pod \"node-ca-lngzl\" (UID: \"8ee2e118-e60c-497a-bebd-d10319626e73\") " pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.933903 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.994744 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w7m4j" Feb 19 20:59:53 crc kubenswrapper[4886]: I0219 20:59:53.999630 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lngzl" Feb 19 20:59:54 crc kubenswrapper[4886]: W0219 20:59:54.006282 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbb997b_bcbe_47fc_99ff_eb1e6b405954.slice/crio-8c5cfb443cd17dd095e1a0edc1d482deda39cccd6202e9e4ac57767d8da19b7b WatchSource:0}: Error finding container 8c5cfb443cd17dd095e1a0edc1d482deda39cccd6202e9e4ac57767d8da19b7b: Status 404 returned error can't find the container with id 8c5cfb443cd17dd095e1a0edc1d482deda39cccd6202e9e4ac57767d8da19b7b Feb 19 20:59:54 crc kubenswrapper[4886]: W0219 20:59:54.012081 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee2e118_e60c_497a_bebd_d10319626e73.slice/crio-cb9f090d2cbb4a56f7f47f601fdb23633ff70737acbd972ce8ad8f1d1be8904f WatchSource:0}: Error finding container cb9f090d2cbb4a56f7f47f601fdb23633ff70737acbd972ce8ad8f1d1be8904f: Status 404 returned error can't find the container with id cb9f090d2cbb4a56f7f47f601fdb23633ff70737acbd972ce8ad8f1d1be8904f Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.014997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.015034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.015043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.015059 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.015072 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.067012 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6stm5"] Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.067363 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rnffz"] Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.067544 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.067946 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.069008 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.069653 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.069905 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.070017 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.070204 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.070917 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.070950 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.071057 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.071167 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.071492 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.099923 4886 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.118084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.118120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.118130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.118145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.118157 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.120972 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.130884 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.144333 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.169600 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.184871 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.209038 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211226 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-netns\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211292 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-cni-multus\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-daemon-config\") pod \"multus-rnffz\" (UID: 
\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211330 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-etc-kubernetes\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211346 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b096c32d-4192-4529-bc55-b05d09004007-mcd-auth-proxy-config\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211361 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-cni-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-k8s-cni-cncf-io\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211396 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fst9z\" (UniqueName: \"kubernetes.io/projected/b096c32d-4192-4529-bc55-b05d09004007-kube-api-access-fst9z\") pod 
\"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211413 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-kubelet\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211430 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-socket-dir-parent\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211445 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-multus-certs\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211461 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-os-release\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211473 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-cni-bin\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211486 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-hostroot\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211514 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk4m\" (UniqueName: \"kubernetes.io/projected/83f8fca5-68c6-4300-b2d8-64a58bf92a64-kube-api-access-jmk4m\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/83f8fca5-68c6-4300-b2d8-64a58bf92a64-cni-binary-copy\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211589 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-conf-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-cnibin\") pod 
\"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-system-cni-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211669 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b096c32d-4192-4529-bc55-b05d09004007-proxy-tls\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.211689 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b096c32d-4192-4529-bc55-b05d09004007-rootfs\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.221999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.222033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.222043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.222057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.222066 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.230582 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.250423 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.264685 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.273075 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.299593 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.311962 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312034 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-cnibin\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312057 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-system-cni-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312072 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b096c32d-4192-4529-bc55-b05d09004007-proxy-tls\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312088 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b096c32d-4192-4529-bc55-b05d09004007-rootfs\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312112 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-netns\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312142 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-cni-multus\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312158 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-daemon-config\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-etc-kubernetes\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312197 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b096c32d-4192-4529-bc55-b05d09004007-mcd-auth-proxy-config\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-cni-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-k8s-cni-cncf-io\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.312256 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 20:59:56.312232497 +0000 UTC m=+26.940075547 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312295 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-k8s-cni-cncf-io\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312299 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-system-cni-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312312 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fst9z\" (UniqueName: 
\"kubernetes.io/projected/b096c32d-4192-4529-bc55-b05d09004007-kube-api-access-fst9z\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312380 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-kubelet\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-socket-dir-parent\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312419 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-multus-certs\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-os-release\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-cni-bin\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312464 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-hostroot\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312497 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312517 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmk4m\" (UniqueName: \"kubernetes.io/projected/83f8fca5-68c6-4300-b2d8-64a58bf92a64-kube-api-access-jmk4m\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/83f8fca5-68c6-4300-b2d8-64a58bf92a64-cni-binary-copy\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-conf-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312553 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-cni-multus\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-netns\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.312623 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-etc-kubernetes\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.312661 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:56.312653668 +0000 UTC m=+26.940496718 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312666 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-cnibin\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312689 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-kubelet\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-socket-dir-parent\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312799 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-run-multus-certs\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312834 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-os-release\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-cni-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312950 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-host-var-lib-cni-bin\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.312981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-hostroot\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.313030 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.313072 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:56.313056188 +0000 UTC m=+26.940899238 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.313296 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-conf-dir\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.313328 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b096c32d-4192-4529-bc55-b05d09004007-rootfs\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.313416 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/83f8fca5-68c6-4300-b2d8-64a58bf92a64-multus-daemon-config\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.313891 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b096c32d-4192-4529-bc55-b05d09004007-mcd-auth-proxy-config\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.314221 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/83f8fca5-68c6-4300-b2d8-64a58bf92a64-cni-binary-copy\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.317540 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.317717 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b096c32d-4192-4529-bc55-b05d09004007-proxy-tls\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.323593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.323629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.323638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.323653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.323664 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.331809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fst9z\" (UniqueName: \"kubernetes.io/projected/b096c32d-4192-4529-bc55-b05d09004007-kube-api-access-fst9z\") pod \"machine-config-daemon-6stm5\" (UID: \"b096c32d-4192-4529-bc55-b05d09004007\") " pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.331842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmk4m\" (UniqueName: \"kubernetes.io/projected/83f8fca5-68c6-4300-b2d8-64a58bf92a64-kube-api-access-jmk4m\") pod \"multus-rnffz\" (UID: \"83f8fca5-68c6-4300-b2d8-64a58bf92a64\") " pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.338386 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.351951 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.365425 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.384649 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.387613 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rnffz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.396026 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.398140 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 20:59:54 crc kubenswrapper[4886]: W0219 20:59:54.398670 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f8fca5_68c6_4300_b2d8_64a58bf92a64.slice/crio-42dec9ef02ce7cc96b9dafea68cde613e558ef8da3029e9a9853b78cf8e0edbb WatchSource:0}: Error finding container 42dec9ef02ce7cc96b9dafea68cde613e558ef8da3029e9a9853b78cf8e0edbb: Status 404 returned error can't find the container with id 42dec9ef02ce7cc96b9dafea68cde613e558ef8da3029e9a9853b78cf8e0edbb Feb 19 20:59:54 crc kubenswrapper[4886]: W0219 20:59:54.409357 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb096c32d_4192_4529_bc55_b05d09004007.slice/crio-c68c2eaa7c9d14f18480ed46d5622e481111219b807f469127f778c12b63a9d9 WatchSource:0}: Error finding container c68c2eaa7c9d14f18480ed46d5622e481111219b807f469127f778c12b63a9d9: Status 404 returned error can't find the container with id c68c2eaa7c9d14f18480ed46d5622e481111219b807f469127f778c12b63a9d9 Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.413025 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.413069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413172 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413185 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413189 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413212 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413222 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413283 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:56.413247144 +0000 UTC m=+27.041090194 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413196 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.413320 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 20:59:56.413313126 +0000 UTC m=+27.041156176 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.419645 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.425852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.425895 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.425908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.425924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.425938 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.434977 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.440307 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vfjj2"] Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.441105 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.443511 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nclwh"] Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.443843 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.443974 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.444619 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.450611 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.450730 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.450881 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.450950 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.450998 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.451112 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.451210 4886 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.470861 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.486890 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.497217 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.509210 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.520459 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.528539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.528578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.528589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.528604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.528615 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.534308 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.556903 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.557969 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:39:55.42063123 +0000 UTC Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.572765 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.588939 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.602899 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.603154 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.603193 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.603579 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.603668 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 20:59:54 crc kubenswrapper[4886]: E0219 20:59:54.603714 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.606461 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.607196 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.608389 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.609012 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.610046 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.610629 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.611332 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.613574 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.614882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-log-socket\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615011 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-config\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-systemd\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjqf\" (UniqueName: \"kubernetes.io/projected/87d8f125-379b-4e5a-bedc-b55cf9edb00a-kube-api-access-pmjqf\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615398 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-slash\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615734 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-netd\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.615865 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-node-log\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.616010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-os-release\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.616181 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-netns\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.616359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-cnibin\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.616519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-ovn\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.616676 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.616818 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-system-cni-dir\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 
20:59:54.616978 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b250eb86-03a2-41d3-b71b-2264cc0b285b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617154 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-var-lib-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617281 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617406 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovn-node-metrics-cert\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-systemd-units\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617669 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-env-overrides\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617767 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617854 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-kubelet\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.617954 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-bin\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.618068 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxx6\" (UniqueName: \"kubernetes.io/projected/b250eb86-03a2-41d3-b71b-2264cc0b285b-kube-api-access-hmxx6\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 
20:59:54.618180 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.618311 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-etc-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.618430 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-script-lib\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.618538 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b250eb86-03a2-41d3-b71b-2264cc0b285b-cni-binary-copy\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.618715 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.619704 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.620244 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.629885 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.630534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.630582 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.630595 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.630611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.630623 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.630850 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.631510 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.632133 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.632820 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.633564 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.634080 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.634786 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.636805 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.637402 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.638000 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.638787 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.639431 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.641611 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.642267 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.643243 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.643908 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.644736 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.646412 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.646888 4886 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.646987 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.652456 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.653189 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.653879 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.654168 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.655104 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.655864 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.656489 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.657191 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.659452 
4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.659997 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.660967 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.662092 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.662695 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.663523 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.664033 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.664912 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.665816 
4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.666707 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.667232 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.667739 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.668529 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.668675 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.669296 4886 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.670159 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.689054 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.707316 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720083 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-log-socket\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-config\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720496 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-systemd\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjqf\" (UniqueName: \"kubernetes.io/projected/87d8f125-379b-4e5a-bedc-b55cf9edb00a-kube-api-access-pmjqf\") pod 
\"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-slash\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720850 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-netd\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720961 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721037 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-config\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-node-log\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721107 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-os-release\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721126 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-netns\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721149 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-cnibin\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-ovn\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721182 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721196 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-system-cni-dir\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721211 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b250eb86-03a2-41d3-b71b-2264cc0b285b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-var-lib-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721246 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721288 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovn-node-metrics-cert\") pod \"ovnkube-node-nclwh\" (UID: 
\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721309 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-systemd-units\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721332 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-env-overrides\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-kubelet\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721367 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-bin\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxx6\" (UniqueName: \"kubernetes.io/projected/b250eb86-03a2-41d3-b71b-2264cc0b285b-kube-api-access-hmxx6\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " 
pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-etc-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-script-lib\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b250eb86-03a2-41d3-b71b-2264cc0b285b-cni-binary-copy\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.720249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-log-socket\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.721954 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b250eb86-03a2-41d3-b71b-2264cc0b285b-cni-binary-copy\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-systemd\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722695 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-slash\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722705 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-node-log\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722867 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-var-lib-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 
20:59:54.722751 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-netd\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722802 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722827 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-os-release\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722848 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-netns\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722857 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-cnibin\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722728 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-ovn\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.722980 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-kubelet\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723024 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-bin\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723040 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-systemd-units\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723044 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723057 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b250eb86-03a2-41d3-b71b-2264cc0b285b-system-cni-dir\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-etc-openvswitch\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723106 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723264 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-env-overrides\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.723674 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-script-lib\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.724032 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b250eb86-03a2-41d3-b71b-2264cc0b285b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: 
\"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.727652 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovn-node-metrics-cert\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.733757 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.733790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.733801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.733820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.733831 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.741836 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxx6\" (UniqueName: \"kubernetes.io/projected/b250eb86-03a2-41d3-b71b-2264cc0b285b-kube-api-access-hmxx6\") pod \"multus-additional-cni-plugins-vfjj2\" (UID: \"b250eb86-03a2-41d3-b71b-2264cc0b285b\") " pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.746352 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjqf\" (UniqueName: \"kubernetes.io/projected/87d8f125-379b-4e5a-bedc-b55cf9edb00a-kube-api-access-pmjqf\") pod \"ovnkube-node-nclwh\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.747268 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.766356 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w7m4j" event={"ID":"5fbb997b-bcbe-47fc-99ff-eb1e6b405954","Type":"ContainerStarted","Data":"cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.766412 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w7m4j" event={"ID":"5fbb997b-bcbe-47fc-99ff-eb1e6b405954","Type":"ContainerStarted","Data":"8c5cfb443cd17dd095e1a0edc1d482deda39cccd6202e9e4ac57767d8da19b7b"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.768218 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" 
event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.768248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.768263 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"c68c2eaa7c9d14f18480ed46d5622e481111219b807f469127f778c12b63a9d9"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.769642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerStarted","Data":"3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.769672 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerStarted","Data":"42dec9ef02ce7cc96b9dafea68cde613e558ef8da3029e9a9853b78cf8e0edbb"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.771015 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lngzl" event={"ID":"8ee2e118-e60c-497a-bebd-d10319626e73","Type":"ContainerStarted","Data":"98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.771042 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lngzl" 
event={"ID":"8ee2e118-e60c-497a-bebd-d10319626e73","Type":"ContainerStarted","Data":"cb9f090d2cbb4a56f7f47f601fdb23633ff70737acbd972ce8ad8f1d1be8904f"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.771999 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.776717 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.778930 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.789642 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: W0219 20:59:54.793372 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb250eb86_03a2_41d3_b71b_2264cc0b285b.slice/crio-77e45ecfdd467104d6ae1b3c7194fd1176dfd621de4718d9d8dee1aceb645161 WatchSource:0}: Error finding container 77e45ecfdd467104d6ae1b3c7194fd1176dfd621de4718d9d8dee1aceb645161: Status 404 returned error can't find the container with id 77e45ecfdd467104d6ae1b3c7194fd1176dfd621de4718d9d8dee1aceb645161 Feb 19 20:59:54 crc kubenswrapper[4886]: W0219 20:59:54.798925 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d8f125_379b_4e5a_bedc_b55cf9edb00a.slice/crio-c7a0db8c82dc24886b687b7a4b2e408a2f81f598d81136afc0179e7146435526 WatchSource:0}: Error finding container c7a0db8c82dc24886b687b7a4b2e408a2f81f598d81136afc0179e7146435526: Status 404 returned error can't find the container with id c7a0db8c82dc24886b687b7a4b2e408a2f81f598d81136afc0179e7146435526 Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.804124 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.819954 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.832944 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.838321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.838348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.838356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.838371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.838381 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.854445 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.871829 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.892612 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.943521 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.944503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.944539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.944552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.944571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.944581 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:54Z","lastTransitionTime":"2026-02-19T20:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.973926 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:54Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.982658 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 20:59:54 crc kubenswrapper[4886]: I0219 20:59:54.997904 4886 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.012538 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.027161 4886 csr.go:261] certificate signing request csr-tvmb8 is approved, waiting to be issued Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.037746 4886 csr.go:257] certificate signing request csr-tvmb8 is issued Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.047727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.047751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.047761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.047775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.047784 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.050733 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.098455 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.138534 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.149822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.149857 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.149868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.149885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.149894 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.170911 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:
59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.252360 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.252395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.252403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.252416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.252424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.354624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.354655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.354663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.354676 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.354684 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.457601 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.457654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.457666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.457683 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.457696 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.558450 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:22:45.943781961 +0000 UTC Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.561314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.561358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.561370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.561388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.561405 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.668372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.668434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.668446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.668462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.668475 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.773753 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.773786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.773797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.773811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.773820 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.775698 4886 generic.go:334] "Generic (PLEG): container finished" podID="b250eb86-03a2-41d3-b71b-2264cc0b285b" containerID="23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f" exitCode=0 Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.775801 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerDied","Data":"23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.775839 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerStarted","Data":"77e45ecfdd467104d6ae1b3c7194fd1176dfd621de4718d9d8dee1aceb645161"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.778374 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1" exitCode=0 Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.778426 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.778456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"c7a0db8c82dc24886b687b7a4b2e408a2f81f598d81136afc0179e7146435526"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.789931 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.804256 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.818575 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.836081 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.847639 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.859364 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.870907 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.876439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.876471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.876480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.876494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.876505 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.885902 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.898654 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc
3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.914546 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.930649 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.946283 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.961382 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.972492 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.980800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.980843 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.980852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.980868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.980877 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:55Z","lastTransitionTime":"2026-02-19T20:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.983099 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:55 crc kubenswrapper[4886]: I0219 20:59:55.995219 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:55Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.006412 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.015689 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.039377 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 20:54:55 +0000 UTC, rotation deadline is 2026-12-22 09:36:49.27836396 +0000 UTC Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.039413 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7332h36m53.238954278s for next certificate rotation Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.041142 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},
{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.058020 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc
3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.071028 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.081759 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.082975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.083087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.083149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.083229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.083316 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.105653 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.134620 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.176559 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.186053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.186090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.186101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.186118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.186131 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.213652 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.250196 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.293500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc 
kubenswrapper[4886]: I0219 20:59:56.293765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.293777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.293793 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.293805 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.293839 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.335488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.335593 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.335632 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:00:00.335608676 +0000 UTC m=+30.963451746 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.335682 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.335679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.335716 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:00.335708088 +0000 UTC m=+30.963551138 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.335778 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.335822 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:00.335810441 +0000 UTC m=+30.963653511 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.396055 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.396091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.396103 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.396118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc 
kubenswrapper[4886]: I0219 20:59:56.396128 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.436070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.436135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436245 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436295 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436300 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436308 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436323 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436338 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436370 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:00.436354616 +0000 UTC m=+31.064197666 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.436390 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:00.436381197 +0000 UTC m=+31.064224247 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.498815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.498849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.498858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.498871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.498878 4886 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.559571 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:29:38.090926801 +0000 UTC Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.600356 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.600405 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.600478 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.600487 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.600542 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 20:59:56 crc kubenswrapper[4886]: E0219 20:59:56.600585 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.601084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.601106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.601114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.601126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.601135 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.704221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.704280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.704290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.704309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.704318 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.782741 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.782780 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.782788 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.782797 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.783888 4886 generic.go:334] "Generic (PLEG): container finished" podID="b250eb86-03a2-41d3-b71b-2264cc0b285b" containerID="314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4" exitCode=0 Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.783915 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerDied","Data":"314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.798404 4886 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.807320 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d
66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.808930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.808956 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.808965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.808979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.808989 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.818083 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.829014 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.846762 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.860081 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.872412 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.882160 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.894230 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.907715 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.913649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.913703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.913712 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.913727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.913737 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:56Z","lastTransitionTime":"2026-02-19T20:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.925541 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.936696 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.948363 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:56 crc kubenswrapper[4886]: I0219 20:59:56.963067 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:56Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.016160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.016198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.016211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.016259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.016269 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.118163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.118455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.118568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.118687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.118787 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.220588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.220629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.220638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.220652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.220661 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.322694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.322740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.322752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.322767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.322781 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.426216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.426297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.426314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.426337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.426354 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.528909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.528982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.529000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.529023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.529040 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.560667 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:34:37.616623615 +0000 UTC Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.632721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.632774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.632790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.632812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.632829 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.734990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.735454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.735475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.735502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.735522 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.791906 4886 generic.go:334] "Generic (PLEG): container finished" podID="b250eb86-03a2-41d3-b71b-2264cc0b285b" containerID="fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd" exitCode=0 Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.791982 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerDied","Data":"fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.809624 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.809692 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.813501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.813705 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.837723 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.842359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.842424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.842444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.842466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.842486 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.862904 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff
536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.886551 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 
20:59:57.909823 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.928041 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.943910 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.945134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.945184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.945202 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.945226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.945245 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:57Z","lastTransitionTime":"2026-02-19T20:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.956811 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.974814 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:57 crc kubenswrapper[4886]: I0219 20:59:57.989850 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:57Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.014134 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.030996 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.044769 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.047441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.047527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.047546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.047573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.047595 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.076033 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.093946 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.106657 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.128724 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.146089 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.151623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.151705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.151720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.151739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.151780 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.166945 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c23
4b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.187114 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.205107 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.223554 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.239457 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.253994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.254034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.254047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc 
kubenswrapper[4886]: I0219 20:59:58.254065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.254079 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.256846 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.275138 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.290284 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.305594 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.322006 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.356760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.356810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.356821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc 
kubenswrapper[4886]: I0219 20:59:58.356845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.356859 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.459859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.459924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.459950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.459982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.460010 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.560909 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:36:53.292118783 +0000 UTC Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.563143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.563192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.563208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.563234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.563253 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.600650 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.600697 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.600714 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 20:59:58 crc kubenswrapper[4886]: E0219 20:59:58.600865 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 20:59:58 crc kubenswrapper[4886]: E0219 20:59:58.600980 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 20:59:58 crc kubenswrapper[4886]: E0219 20:59:58.601159 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.666901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.667000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.667020 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.667047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.667098 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.769483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.769543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.769560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.769585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.769606 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.820967 4886 generic.go:334] "Generic (PLEG): container finished" podID="b250eb86-03a2-41d3-b71b-2264cc0b285b" containerID="117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63" exitCode=0 Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.821069 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerDied","Data":"117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.850293 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.870167 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.872202 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.872618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.872667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.872699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.872723 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.890595 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.904032 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.914895 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.928560 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.942469 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.955467 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.967927 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.977297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.977351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.977363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.977382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.977628 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:58Z","lastTransitionTime":"2026-02-19T20:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.982065 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:58 crc kubenswrapper[4886]: I0219 20:59:58.996809 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:58Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.009081 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.024712 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.038510 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.080439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.080473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.080484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.080502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.080709 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.183446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.183487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.183498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.183513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.183524 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.285782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.285831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.285850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.285872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.285888 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.388324 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.388377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.388394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.388418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.388435 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.491108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.491159 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.491171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.491189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.491202 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.562097 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:26:51.425314042 +0000 UTC Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.593864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.593934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.593959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.593989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.594013 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.697062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.697124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.697142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.697164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.697182 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.800613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.800670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.800686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.800712 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.800729 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.840928 4886 generic.go:334] "Generic (PLEG): container finished" podID="b250eb86-03a2-41d3-b71b-2264cc0b285b" containerID="ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515" exitCode=0 Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.840988 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerDied","Data":"ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.850463 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.863200 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.881806 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.903741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.903796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.903815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.903840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.903859 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T20:59:59Z","lastTransitionTime":"2026-02-19T20:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.905864 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.925632 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.950310 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.970543 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 20:59:59 crc kubenswrapper[4886]: I0219 20:59:59.990038 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T20:59:59Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.007003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.007051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.007068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.007093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.007110 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.013873 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.029903 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.049356 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.065689 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.081033 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.110830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.110868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.110878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.110894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.110906 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.117182 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.164354 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.213706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.213739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.213750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 
21:00:00.213766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.213775 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.225387 4886 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.316251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.316309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.316321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.316338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.316349 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.382424 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.382604 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.382673 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.382784 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.382905 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:08.382880853 +0000 UTC m=+39.010723933 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.382980 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:00:08.382966645 +0000 UTC m=+39.010809735 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.382989 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.383089 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:08.383064568 +0000 UTC m=+39.010907648 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.418479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.418521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.418530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.418545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.418557 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.483668 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.483800 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.483828 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.483859 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.483870 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.483926 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:08.483910301 +0000 UTC m=+39.111753351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.483974 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.484001 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.484020 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.484147 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:08.484099865 +0000 UTC m=+39.111942945 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.521294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.521350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.521370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.521394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.521413 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.562476 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:04:36.355755867 +0000 UTC Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.600943 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.600977 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.601404 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.601399 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.601506 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:00 crc kubenswrapper[4886]: E0219 21:00:00.601575 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.617428 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.623505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.623565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.623585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.623608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.623625 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.630337 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.648612 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.662408 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.694145 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.718394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.725579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.725610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.725618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.725632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.725640 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.739437 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c23
4b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.756351 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.771815 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.790538 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.809679 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.828070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.828137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.828161 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.828191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.828214 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.829525 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.850795 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.866100 4886 generic.go:334] "Generic (PLEG): container finished" podID="b250eb86-03a2-41d3-b71b-2264cc0b285b" 
containerID="a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9" exitCode=0 Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.866166 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerDied","Data":"a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.877508 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-ku
be\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.898114 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b198
88cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.919204 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.931660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:00 crc 
kubenswrapper[4886]: I0219 21:00:00.931725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.931743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.931770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.931789 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:00Z","lastTransitionTime":"2026-02-19T21:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.942217 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.962203 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:00 crc kubenswrapper[4886]: I0219 21:00:00.985910 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.002205 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.020468 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.035543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.035656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.035680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.035703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.035719 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.051905 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.071218 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.086214 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.105578 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.134153 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.139196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.139248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.139295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.139328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.139351 4886 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.158760 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.178511 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.241228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.241310 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.241325 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.241342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.241355 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.343629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.343687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.343704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.343728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.343747 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.446702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.446760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.446777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.446801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.446818 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.501740 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.502877 4886 scope.go:117] "RemoveContainer" containerID="fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666" Feb 19 21:00:01 crc kubenswrapper[4886]: E0219 21:00:01.503200 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.550160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.550216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.550234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.550259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.550308 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.562925 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:44:52.717897496 +0000 UTC Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.653408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.653485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.653504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.653529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.653547 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.756631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.756670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.756680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.756696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.756708 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.860092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.860136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.860152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.860175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.860191 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.882433 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" event={"ID":"b250eb86-03a2-41d3-b71b-2264cc0b285b","Type":"ContainerStarted","Data":"8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.890928 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.891369 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.891459 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.904613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.925073 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.931446 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.934880 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.948371 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67
b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.963771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.963855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.963880 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.963909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.963930 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:01Z","lastTransitionTime":"2026-02-19T21:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.970789 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:01 crc kubenswrapper[4886]: I0219 21:00:01.990378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:01Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.006723 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.017244 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.028376 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.056925 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.066089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.066122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.066131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.066144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.066153 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.074584 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.093760 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.110348 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.128506 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.147351 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.162477 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.168758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.168814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.168831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.168856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.168929 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.179788 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c23
4b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.193135 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.209853 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.221742 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.233317 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.249036 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.259740 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.276157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.276221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.276237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.276282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.276300 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.279646 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z 
is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.292995 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.312438 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402
a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.328864 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.342365 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.374057 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:02Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.379158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.379232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.379251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.379303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.379321 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.481808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.481870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.481889 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.481930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.481947 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.563408 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:01:49.867549888 +0000 UTC Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.584958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.585022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.585048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.585082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.585107 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.600560 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:02 crc kubenswrapper[4886]: E0219 21:00:02.600713 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.601043 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:02 crc kubenswrapper[4886]: E0219 21:00:02.601212 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.601068 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:02 crc kubenswrapper[4886]: E0219 21:00:02.601362 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.656423 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.687383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.687449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.687467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.687490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.687507 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.791187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.791306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.791327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.791349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.791367 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.893888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.893932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.893948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.893970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.893987 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.997218 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.997310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.997327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.997352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:02 crc kubenswrapper[4886]: I0219 21:00:02.997369 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:02Z","lastTransitionTime":"2026-02-19T21:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.100336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.100384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.100401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.100424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.100441 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.203458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.203522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.203540 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.203565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.203621 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.305716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.305770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.305786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.305808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.305823 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.408441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.408482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.408494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.408513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.408526 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.511481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.511532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.511548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.511572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.511589 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.564404 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:56:54.524984109 +0000 UTC Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.615081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.615125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.615143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.615165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.615184 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.660167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.660238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.660258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.660312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.660335 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: E0219 21:00:03.680625 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:03Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.686054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.686104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.686126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.686154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.686176 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: E0219 21:00:03.710945 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:03Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.717615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.717655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.717664 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.717686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.717698 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: E0219 21:00:03.740373 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:03Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.750450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.750518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.750537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.750562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.750579 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: E0219 21:00:03.768766 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:03Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.774997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.775049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.775065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.775086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.775102 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: E0219 21:00:03.790245 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:03Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:03 crc kubenswrapper[4886]: E0219 21:00:03.790374 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.792813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.792856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.792868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.792881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.792891 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.895119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.895170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.895187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.895214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:03 crc kubenswrapper[4886]: I0219 21:00:03.895231 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:03Z","lastTransitionTime":"2026-02-19T21:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.015897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.015950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.015967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.015992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.016014 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.119096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.119146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.119161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.119180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.119195 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.221761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.221824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.221842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.221865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.221886 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.324887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.324929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.324939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.324956 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.324968 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.427781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.427836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.427855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.427880 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.427897 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.530668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.530731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.530741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.530759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.530770 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.565134 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:50:12.234794754 +0000 UTC Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.601100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.601119 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.601303 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:04 crc kubenswrapper[4886]: E0219 21:00:04.601474 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:04 crc kubenswrapper[4886]: E0219 21:00:04.601641 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:04 crc kubenswrapper[4886]: E0219 21:00:04.601784 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.632969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.633035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.633053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.633078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.633095 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.736192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.736300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.736329 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.736359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.736381 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.839918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.839984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.840001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.840028 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.840045 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.904760 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/0.log" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.908356 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad" exitCode=1 Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.908412 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.909381 4886 scope.go:117] "RemoveContainer" containerID="07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.923882 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:04Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.942347 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:04Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.943519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.943550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.943560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.943573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.943584 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:04Z","lastTransitionTime":"2026-02-19T21:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.954658 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:04Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.984451 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:04Z\\\",\\\"message\\\":\\\"21:00:04.444999 6214 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 21:00:04.445420 6214 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 21:00:04.445461 6214 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0219 21:00:04.445507 6214 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:04.445533 6214 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:04.446637 6214 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:04.446689 6214 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 21:00:04.446724 6214 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:04.446733 6214 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:04.446745 6214 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:04.446753 6214 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:04.446777 6214 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:04.446793 6214 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:04.446785 6214 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 21:00:04.446825 6214 factory.go:656] Stopping watch factory\\\\nI0219 21:00:04.446848 6214 ovnkube.go:599] Stopped ovnkube\\\\nI0219 
21:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612
aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:04Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:04 crc kubenswrapper[4886]: I0219 21:00:04.998957 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:04Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.016755 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.033667 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.045731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.045783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.045812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.045825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.045833 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.059226 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.079021 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.102455 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.119085 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.138671 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.148637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.148686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.148704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.148728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.148745 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.157921 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.174675 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.251449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.251510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.251528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.251553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.251571 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.354333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.354365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.354373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.354386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.354396 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.457698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.457747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.457761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.457779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.457791 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.560911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.560954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.560966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.560984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.560997 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.565322 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:49:25.280642734 +0000 UTC Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.664404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.664460 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.664477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.664501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.664519 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.767903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.767959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.767976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.768003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.768019 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.869929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.869977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.869991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.870007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.870018 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.914436 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/0.log" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.918422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.919049 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.941431 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.956218 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.972240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.972283 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.972293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.972307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.972324 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:05Z","lastTransitionTime":"2026-02-19T21:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:05 crc kubenswrapper[4886]: I0219 21:00:05.980626 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:04Z\\\",\\\"message\\\":\\\"21:00:04.444999 6214 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 21:00:04.445420 6214 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 21:00:04.445461 6214 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 21:00:04.445507 6214 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
21:00:04.445533 6214 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:04.446637 6214 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:04.446689 6214 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 21:00:04.446724 6214 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:04.446733 6214 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:04.446745 6214 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:04.446753 6214 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:04.446777 6214 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:04.446793 6214 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:04.446785 6214 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 21:00:04.446825 6214 factory.go:656] Stopping watch factory\\\\nI0219 21:00:04.446848 6214 ovnkube.go:599] Stopped ovnkube\\\\nI0219 
21:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.002333 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.022234 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.039217 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.060869 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.074944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.074995 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.075011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.075034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.075052 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.075050 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:
59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.094184 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.111361 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.128997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.148209 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.162747 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.178463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.178515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.178534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.178557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.178575 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.180251 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.281850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.281928 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.281947 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.281975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.281994 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.384572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.384621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.384632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.384651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.384663 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.488028 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.488093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.488112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.488137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.488155 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.565795 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:45:48.359434049 +0000 UTC Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.591390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.591432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.591448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.591470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.591487 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.600964 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.600987 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:06 crc kubenswrapper[4886]: E0219 21:00:06.601149 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:06 crc kubenswrapper[4886]: E0219 21:00:06.601468 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.601576 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:06 crc kubenswrapper[4886]: E0219 21:00:06.601909 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.695069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.695140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.695158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.695183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.695204 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.798309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.798373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.798391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.798418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.798443 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.901738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.901783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.901802 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.901824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.901841 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:06Z","lastTransitionTime":"2026-02-19T21:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.925107 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/1.log" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.926112 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/0.log" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.930775 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b" exitCode=1 Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.930879 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b"} Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.930943 4886 scope.go:117] "RemoveContainer" containerID="07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.933479 4886 scope.go:117] "RemoveContainer" containerID="4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b" Feb 19 21:00:06 crc kubenswrapper[4886]: E0219 21:00:06.933933 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:06 crc kubenswrapper[4886]: I0219 21:00:06.955168 4886 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:06.998887 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:06Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.008622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.008659 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.008670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.008687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.008700 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.028851 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.058680 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:04Z\\\",\\\"message\\\":\\\"21:00:04.444999 6214 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 21:00:04.445420 6214 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 21:00:04.445461 6214 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 21:00:04.445507 6214 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
21:00:04.445533 6214 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:04.446637 6214 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:04.446689 6214 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 21:00:04.446724 6214 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:04.446733 6214 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:04.446745 6214 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:04.446753 6214 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:04.446777 6214 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:04.446793 6214 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:04.446785 6214 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 21:00:04.446825 6214 factory.go:656] Stopping watch factory\\\\nI0219 21:00:04.446848 6214 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 
2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa
1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.075980 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.091051 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.105047 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.111086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.111133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.111148 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.111168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.111183 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.122522 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.143328 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.161779 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.175898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.190387 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.209532 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.213298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.213359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.213379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.213406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.213424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.227576 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.316403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.316479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.316494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.316513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.316527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.362095 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg"] Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.362722 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.365346 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.366321 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.389377 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.407941 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.419961 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.420038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.420064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.420107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.420132 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.424147 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.439128 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.453420 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.459278 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f72327b-1f93-42b9-be5e-0fe0aee6035f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.459472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f72327b-1f93-42b9-be5e-0fe0aee6035f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.459757 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfnj\" (UniqueName: \"kubernetes.io/projected/0f72327b-1f93-42b9-be5e-0fe0aee6035f-kube-api-access-zqfnj\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.459859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f72327b-1f93-42b9-be5e-0fe0aee6035f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.468488 4886 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.485255 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.499211 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.517347 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.523423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.523490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.523508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.523533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.523551 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.550136 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07a1b4f7dd79dc94346eeabf5d424d217ce2e756a5f8f64fe44c2cfc70169dad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:04Z\\\",\\\"message\\\":\\\"21:00:04.444999 6214 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 21:00:04.445420 6214 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 21:00:04.445461 6214 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 21:00:04.445507 6214 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
21:00:04.445533 6214 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:04.446637 6214 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:04.446689 6214 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 21:00:04.446724 6214 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:04.446733 6214 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:04.446745 6214 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:04.446753 6214 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:04.446777 6214 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:04.446793 6214 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:04.446785 6214 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 21:00:04.446825 6214 factory.go:656] Stopping watch factory\\\\nI0219 21:00:04.446848 6214 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 
2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa
1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.561201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f72327b-1f93-42b9-be5e-0fe0aee6035f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.561277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfnj\" (UniqueName: \"kubernetes.io/projected/0f72327b-1f93-42b9-be5e-0fe0aee6035f-kube-api-access-zqfnj\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.561307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f72327b-1f93-42b9-be5e-0fe0aee6035f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.561351 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f72327b-1f93-42b9-be5e-0fe0aee6035f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.562094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f72327b-1f93-42b9-be5e-0fe0aee6035f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.562130 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f72327b-1f93-42b9-be5e-0fe0aee6035f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.566973 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:00:07.854043133 +0000 UTC Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.568171 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/0f72327b-1f93-42b9-be5e-0fe0aee6035f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.572567 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"nam
e\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.583360 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfnj\" (UniqueName: \"kubernetes.io/projected/0f72327b-1f93-42b9-be5e-0fe0aee6035f-kube-api-access-zqfnj\") pod \"ovnkube-control-plane-749d76644c-r47dg\" (UID: \"0f72327b-1f93-42b9-be5e-0fe0aee6035f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.588562 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.607425 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.619921 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.626296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.626339 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.626356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.626410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.626437 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.633172 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.682843 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" Feb 19 21:00:07 crc kubenswrapper[4886]: W0219 21:00:07.697580 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f72327b_1f93_42b9_be5e_0fe0aee6035f.slice/crio-6538eb8d9eb0582855c8cfd16da5408d8d54bed7dce344551f9cff763485c0fc WatchSource:0}: Error finding container 6538eb8d9eb0582855c8cfd16da5408d8d54bed7dce344551f9cff763485c0fc: Status 404 returned error can't find the container with id 6538eb8d9eb0582855c8cfd16da5408d8d54bed7dce344551f9cff763485c0fc Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.728743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.728778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.728789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.728805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.728816 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.831643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.831669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.831678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.831691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.831700 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.933635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.933696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.933718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.933749 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.933771 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:07Z","lastTransitionTime":"2026-02-19T21:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.936757 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/1.log" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.941771 4886 scope.go:117] "RemoveContainer" containerID="4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b" Feb 19 21:00:07 crc kubenswrapper[4886]: E0219 21:00:07.942097 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.944843 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" event={"ID":"0f72327b-1f93-42b9-be5e-0fe0aee6035f","Type":"ContainerStarted","Data":"6538eb8d9eb0582855c8cfd16da5408d8d54bed7dce344551f9cff763485c0fc"} Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.965119 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:07 crc kubenswrapper[4886]: I0219 21:00:07.989713 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:07Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.017293 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.036368 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.036432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.036587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.036600 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.036617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.036629 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.060567 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.080608 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.101822 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.121297 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.139610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.139668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.139686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.139710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.139727 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.143005 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.162307 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.184365 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.203404 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.221066 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.242021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.242069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.242084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.242103 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.242116 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.245885 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.259196 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.345074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.345131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.345149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 
21:00:08.345174 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.345192 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.448062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.448111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.448123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.448141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.448154 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.468190 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.468425 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:00:24.468389586 +0000 UTC m=+55.096232696 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.468527 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.468634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.468731 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.468818 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.468863 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:24.468839337 +0000 UTC m=+55.096682417 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.468903 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:24.468880638 +0000 UTC m=+55.096723698 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.531819 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6hp27"] Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.543951 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.544055 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.551371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.551421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.551452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.551471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.551484 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.565937 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.568096 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:45:41.678518315 +0000 UTC Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.569984 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.570089 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570192 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570235 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: 
E0219 21:00:08.570320 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.570194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570354 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570463 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570484 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570406 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:24.570380923 +0000 UTC m=+55.198224043 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.570592 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922ll\" (UniqueName: \"kubernetes.io/projected/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-kube-api-access-922ll\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.570639 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:24.570612879 +0000 UTC m=+55.198455969 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.586344 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.600840 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.600882 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.600845 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.601001 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.601341 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.601210 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.604686 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.623040 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.641859 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.654257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.654362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.654382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.654405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.654421 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.666227 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.671598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-922ll\" (UniqueName: \"kubernetes.io/projected/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-kube-api-access-922ll\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.671640 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.671765 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: E0219 21:00:08.671810 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs 
podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:09.171794966 +0000 UTC m=+39.799638026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.689822 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318b
deaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.706429 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc 
kubenswrapper[4886]: I0219 21:00:08.707146 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-922ll\" (UniqueName: \"kubernetes.io/projected/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-kube-api-access-922ll\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.726882 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.744743 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.757555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.757945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.758147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.758381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.758552 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.776613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.803087 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc
3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.825872 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.848988 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.863675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.863728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.863742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.863762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.863779 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.866770 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.883179 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.957816 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" event={"ID":"0f72327b-1f93-42b9-be5e-0fe0aee6035f","Type":"ContainerStarted","Data":"7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.957872 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" event={"ID":"0f72327b-1f93-42b9-be5e-0fe0aee6035f","Type":"ContainerStarted","Data":"8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.966374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.966429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.966447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.966469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.966489 4886 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:08Z","lastTransitionTime":"2026-02-19T21:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:08 crc kubenswrapper[4886]: I0219 21:00:08.974845 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:08 crc 
kubenswrapper[4886]: I0219 21:00:08.993072 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b
s752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:08Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.011155 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.041955 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.061953 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.069680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.069957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.069987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 
21:00:09.070019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.070040 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.081164 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.103325 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.127417 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.145240 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.162823 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.172841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.172882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.172891 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.172906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.172918 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.175915 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:09 crc kubenswrapper[4886]: E0219 21:00:09.176071 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:09 crc kubenswrapper[4886]: E0219 21:00:09.176241 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:10.176166074 +0000 UTC m=+40.804009134 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.180188 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.197899 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.216690 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.231479 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.244709 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.258796 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:09Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.275841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.275879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.275887 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.275903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.275912 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.424691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.424756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.424775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.424804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.424827 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.528059 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.528124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.528139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.528160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.528175 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.568692 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:35:34.327652365 +0000 UTC Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.630656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.630690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.630704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.630723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.630734 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.733730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.733774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.733792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.733815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.733833 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.836701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.836742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.836754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.836769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.836780 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.939811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.939861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.939876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.939897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:09 crc kubenswrapper[4886]: I0219 21:00:09.939912 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:09Z","lastTransitionTime":"2026-02-19T21:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.042609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.042685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.042699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.042729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.042749 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.146808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.146883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.146894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.146914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.146927 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.185534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:10 crc kubenswrapper[4886]: E0219 21:00:10.185676 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:10 crc kubenswrapper[4886]: E0219 21:00:10.185751 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:12.18573237 +0000 UTC m=+42.813575420 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.249316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.249392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.249412 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.249440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.249459 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.353032 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.353083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.353100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.353124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.353141 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.456030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.456132 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.456161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.456192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.456214 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.560134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.560187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.560197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.560214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.560230 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.569762 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:26:10.909910243 +0000 UTC Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.600775 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.600864 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:10 crc kubenswrapper[4886]: E0219 21:00:10.600993 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.601013 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.601038 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:10 crc kubenswrapper[4886]: E0219 21:00:10.601197 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:10 crc kubenswrapper[4886]: E0219 21:00:10.601390 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:10 crc kubenswrapper[4886]: E0219 21:00:10.601591 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.623545 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.638429 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.662986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.663052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.663070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.663098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.663116 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.668816 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.685183 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e
380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.709244 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.730237 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.749895 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.766617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.766699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.766721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.766750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.766774 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.775761 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.798826 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.818789 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.838687 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.857401 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.869325 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.869393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.869414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.869446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.869469 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.881515 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z 
is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.898812 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.916826 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402
a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.931986 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:10 crc 
kubenswrapper[4886]: I0219 21:00:10.971902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.971946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.971963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.971985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:10 crc kubenswrapper[4886]: I0219 21:00:10.972005 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:10Z","lastTransitionTime":"2026-02-19T21:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.076131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.076226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.076247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.076315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.076366 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.179021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.179126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.179155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.179186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.179208 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.282148 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.282206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.282222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.282245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.282295 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.385477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.385559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.385596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.385635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.385658 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.488647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.488709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.488726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.488750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.488769 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.570861 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:02:39.059643191 +0000 UTC Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.591215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.591303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.591322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.591345 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.591362 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.694523 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.694657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.694677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.694702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.694720 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.796826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.796881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.796896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.796920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.796936 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.899845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.899905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.899921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.899945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:11 crc kubenswrapper[4886]: I0219 21:00:11.899961 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:11Z","lastTransitionTime":"2026-02-19T21:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.002252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.002327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.002342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.002361 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.002376 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.105063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.105118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.105135 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.105158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.105174 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.209199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.209299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.209317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.209342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.209358 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.210976 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:12 crc kubenswrapper[4886]: E0219 21:00:12.211185 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:12 crc kubenswrapper[4886]: E0219 21:00:12.211323 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:16.211249146 +0000 UTC m=+46.839092236 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.312638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.312701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.312723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.312750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.312773 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.415846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.415896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.415912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.415933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.415952 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.519717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.519799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.519820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.519849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.519869 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.571344 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:19:55.16926972 +0000 UTC Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.600949 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.601081 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:12 crc kubenswrapper[4886]: E0219 21:00:12.601126 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.601085 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.601184 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:12 crc kubenswrapper[4886]: E0219 21:00:12.601363 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:12 crc kubenswrapper[4886]: E0219 21:00:12.601496 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:12 crc kubenswrapper[4886]: E0219 21:00:12.601637 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.622533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.622577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.622587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.622606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.622617 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.724678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.724736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.724748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.724767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.724778 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.827745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.827797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.827812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.827831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.827843 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.931125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.931186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.931203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.931227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:12 crc kubenswrapper[4886]: I0219 21:00:12.931245 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:12Z","lastTransitionTime":"2026-02-19T21:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.033322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.033375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.033391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.033414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.033436 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.136695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.136737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.136747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.136763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.136774 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.248881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.248942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.248965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.248998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.249019 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.351912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.351977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.352000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.352029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.352050 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.455029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.455091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.455110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.455134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.455151 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.557371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.557450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.557475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.557507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.557527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.572159 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:13:42.651978101 +0000 UTC Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.601857 4886 scope.go:117] "RemoveContainer" containerID="fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.660499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.660560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.660582 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.660611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.660634 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.763701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.764094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.764316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.764543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.764730 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.867818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.867868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.867886 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.867904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.867916 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.947383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.947440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.947456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.947485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.947503 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: E0219 21:00:13.964644 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:13Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.970324 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.970368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.970379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.970394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.970405 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:13 crc kubenswrapper[4886]: E0219 21:00:13.992283 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:13Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.994897 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.997513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.997574 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.997594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.997623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:13 crc kubenswrapper[4886]: I0219 21:00:13.997643 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:13Z","lastTransitionTime":"2026-02-19T21:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.001011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.001530 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.015189 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.020995 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.021058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.021077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.021102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.021124 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.035978 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.039809 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.046575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.046626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.046645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.046669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.046687 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.067861 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.068143 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.068307 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.073389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.073409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.073420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.073434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.073446 4886 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.087603 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.114812 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.139535 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.151352 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.164904 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.175716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.175747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.175759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.175777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.175788 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.180845 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c23
4b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.194022 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.208895 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.226100 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.243714 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.258365 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.271142 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc 
kubenswrapper[4886]: I0219 21:00:14.278101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.278140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.278151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.278166 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.278176 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.282472 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.299350 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bb
a8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:14Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.380681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.380720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.380728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.380742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.380753 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.482784 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.482811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.482819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.482831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.482839 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.572715 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:51:25.721969861 +0000 UTC Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.585809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.585856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.585868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.585885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.585897 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.599997 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.600093 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.600272 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.600347 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.600272 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.600412 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.600574 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:14 crc kubenswrapper[4886]: E0219 21:00:14.600715 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.689040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.689075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.689086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.689102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.689113 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.791874 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.791926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.791943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.791967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.791985 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.894708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.894748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.894759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.894777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.894790 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.997838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.997888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.997905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.997928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:14 crc kubenswrapper[4886]: I0219 21:00:14.997947 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:14Z","lastTransitionTime":"2026-02-19T21:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.100834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.100896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.100919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.100948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.100970 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.204352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.204425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.204448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.204478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.204500 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.307013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.307085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.307109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.307138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.307160 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.434844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.434948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.434972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.435005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.435030 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.538013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.538071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.538087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.538112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.538131 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.573234 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:14:32.910797844 +0000 UTC Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.641472 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.641543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.641562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.641586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.641606 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.745040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.745110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.745133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.745162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.745179 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.848180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.848236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.848259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.848457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.848480 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.951718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.951790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.951853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.951884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:15 crc kubenswrapper[4886]: I0219 21:00:15.951911 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:15Z","lastTransitionTime":"2026-02-19T21:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.054732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.054841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.054894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.054917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.054934 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.157089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.157174 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.157227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.157255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.157302 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.260120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.260188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.260210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.260306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.260337 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.291095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:16 crc kubenswrapper[4886]: E0219 21:00:16.291321 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:16 crc kubenswrapper[4886]: E0219 21:00:16.291440 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:24.291408518 +0000 UTC m=+54.919251608 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.363041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.363110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.363128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.363154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.363172 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.466365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.466417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.466428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.466443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.466456 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.568658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.568716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.568734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.568758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.568776 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.574135 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:40:31.515144373 +0000 UTC Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.600743 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.600829 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.600830 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.600989 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:16 crc kubenswrapper[4886]: E0219 21:00:16.600976 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:16 crc kubenswrapper[4886]: E0219 21:00:16.601098 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:16 crc kubenswrapper[4886]: E0219 21:00:16.601246 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:16 crc kubenswrapper[4886]: E0219 21:00:16.601452 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.670745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.670813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.670832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.670857 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.670877 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.773413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.773454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.773467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.773486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.773500 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.877167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.877224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.877242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.877301 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.877320 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.979913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.979964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.979981 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.980003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:16 crc kubenswrapper[4886]: I0219 21:00:16.980019 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:16Z","lastTransitionTime":"2026-02-19T21:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.082548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.082608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.082624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.082648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.082667 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.190687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.190762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.190784 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.190813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.190838 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.294462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.294567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.294585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.294609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.294627 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.398742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.398805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.398822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.398847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.398866 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.502607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.502684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.502701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.502727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.502745 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.574461 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:14:09.780979707 +0000 UTC Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.605371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.605442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.605459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.605480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.605497 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.708764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.708844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.708869 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.708901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.708924 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.811791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.811849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.811868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.811893 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.811911 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.914697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.914740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.914752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.914771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:17 crc kubenswrapper[4886]: I0219 21:00:17.914783 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:17Z","lastTransitionTime":"2026-02-19T21:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.017052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.017108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.017130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.017158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.017180 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.120502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.120572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.120593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.120621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.120644 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.223494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.223551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.223610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.223642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.223667 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.325905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.325971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.325983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.326025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.326041 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.429457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.429520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.429538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.429567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.429585 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.532849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.532984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.533015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.533048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.533070 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.574640 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:10:47.286557645 +0000 UTC Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.600555 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.600730 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.600623 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.600555 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:18 crc kubenswrapper[4886]: E0219 21:00:18.600857 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:18 crc kubenswrapper[4886]: E0219 21:00:18.600987 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:18 crc kubenswrapper[4886]: E0219 21:00:18.601086 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:18 crc kubenswrapper[4886]: E0219 21:00:18.601209 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.636364 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.636415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.636430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.636449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.636464 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.738466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.738498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.738506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.738518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.738527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.840837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.840925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.840942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.840964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.840980 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.943445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.943508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.943531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.943565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:18 crc kubenswrapper[4886]: I0219 21:00:18.943589 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:18Z","lastTransitionTime":"2026-02-19T21:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.046563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.046629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.046654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.046684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.046710 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.149478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.149533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.149550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.149573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.149591 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.253008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.253076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.253093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.253117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.253135 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.356618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.356681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.356701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.356733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.356755 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.460377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.460440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.460462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.460491 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.460513 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.563080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.563128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.563144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.563168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.563185 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.575033 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:47:23.266114148 +0000 UTC Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.601477 4886 scope.go:117] "RemoveContainer" containerID="4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.667372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.667780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.667807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.667839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.667861 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.770016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.770060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.770075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.770096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.770111 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.872655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.872722 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.872739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.872764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.872782 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.975473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.975540 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.975559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.975583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:19 crc kubenswrapper[4886]: I0219 21:00:19.975600 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:19Z","lastTransitionTime":"2026-02-19T21:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.023661 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/1.log" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.027499 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.028734 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.050021 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.070202 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.078446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.078492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.078504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.078520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.078531 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.087730 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.106313 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.138425 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.155917 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.171221 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.180950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.180996 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.181006 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc 
kubenswrapper[4886]: I0219 21:00:20.181019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.181028 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.185347 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc 
kubenswrapper[4886]: I0219 21:00:20.198954 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.211022 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.230848 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.243813 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.257534 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.271110 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.283423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.283474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.283487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.283505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.283517 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.291460 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.304680 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e
380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.385710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.385750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.385758 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.385771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.385779 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.488177 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.488235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.488249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.488296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.488314 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.575799 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:52:48.062518681 +0000 UTC Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611290 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611367 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611362 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:20 crc kubenswrapper[4886]: E0219 21:00:20.611434 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611465 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: E0219 21:00:20.611561 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:20 crc kubenswrapper[4886]: E0219 21:00:20.611656 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:20 crc kubenswrapper[4886]: E0219 21:00:20.611725 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611757 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611776 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.611827 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.628880 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.645571 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.661334 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.678726 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.696426 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.717070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.717138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.717162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.717193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.717221 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.722424 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.743475 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.760070 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.774963 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.792895 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.805666 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.820827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.820906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.820924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.821344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.821633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.823047 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.838731 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc 
kubenswrapper[4886]: I0219 21:00:20.856165 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.870939 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.900087 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:20Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.924015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.924063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.924078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.924099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:20 crc kubenswrapper[4886]: I0219 21:00:20.924111 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:20Z","lastTransitionTime":"2026-02-19T21:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.027309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.027378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.027399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.027424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.027442 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.032449 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/2.log" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.033500 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/1.log" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.036861 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e" exitCode=1 Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.036945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.037018 4886 scope.go:117] "RemoveContainer" containerID="4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.038580 4886 scope.go:117] "RemoveContainer" containerID="c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e" Feb 19 21:00:21 crc kubenswrapper[4886]: E0219 21:00:21.038843 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.059244 4886 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.077258 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc 
kubenswrapper[4886]: I0219 21:00:21.093748 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b
s752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.109723 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.130008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.130059 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.130077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.130101 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.130118 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.143695 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4835e9806734a6def638ebcc9b54de3267f5fef491aaf75a8eda408466d1ff9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:06Z\\\",\\\"message\\\":\\\"r {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:05Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:06.001675 6346 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", Exte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:20.590212 6579 
handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56d
c8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.164643 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.184540 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.201495 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.225878 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.232409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.232467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.232486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.232510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.232527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.240468 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.260696 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.277923 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.292115 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.309593 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.325962 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.336170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.336208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.336220 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.336236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.336248 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.346663 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:21Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.440014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.440112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.440190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.440241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.440291 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.543856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.543914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.543931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.543958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.543980 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.576426 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:05:17.237348079 +0000 UTC Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.647461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.647529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.647547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.647621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.647640 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.750411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.750460 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.750478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.750501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.750518 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.853396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.853453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.853470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.853492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.853508 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.957310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.957367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.957385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.957409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:21 crc kubenswrapper[4886]: I0219 21:00:21.957426 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:21Z","lastTransitionTime":"2026-02-19T21:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.045216 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/2.log" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.050184 4886 scope.go:117] "RemoveContainer" containerID="c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e" Feb 19 21:00:22 crc kubenswrapper[4886]: E0219 21:00:22.050628 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.060877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.060922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.060940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.060966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.060982 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.067137 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.082912 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.098922 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc 
kubenswrapper[4886]: I0219 21:00:22.115358 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.130828 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.160797 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.165144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.165203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.165227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.165255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.165314 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.180868 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.203194 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.224208 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.246009 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.268034 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.268255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.268300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.268311 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.268327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.268339 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.288542 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.306627 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.325290 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.342886 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.359530 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:22Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.370575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.370620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.370635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.370655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.370670 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.473876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.473957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.473976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.474004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.474025 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.576691 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:23:41.937146172 +0000 UTC Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.577672 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.577752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.577771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.577801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.577822 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.601001 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.601093 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.601009 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:22 crc kubenswrapper[4886]: E0219 21:00:22.601356 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.601520 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:22 crc kubenswrapper[4886]: E0219 21:00:22.601729 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:22 crc kubenswrapper[4886]: E0219 21:00:22.601878 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:22 crc kubenswrapper[4886]: E0219 21:00:22.601965 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.681182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.681597 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.681735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.681859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.681986 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.785888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.786323 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.786506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.786638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.786767 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.890458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.890783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.890866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.890945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.891017 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.993653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.994002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.994356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.994736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:22 crc kubenswrapper[4886]: I0219 21:00:22.994897 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:22Z","lastTransitionTime":"2026-02-19T21:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.097781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.097853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.097876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.097903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.097924 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.201400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.201785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.201977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.202133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.202320 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.305364 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.305434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.305484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.305515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.305548 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.408848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.408919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.408943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.408971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.408988 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.512498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.512587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.512606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.512654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.512677 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.577071 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:58:33.675845884 +0000 UTC Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.616821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.616874 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.616898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.616927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.616950 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.720678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.720739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.720763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.720794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.720817 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.823171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.823219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.823238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.823288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.823306 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.926389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.926439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.926457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.926479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:23 crc kubenswrapper[4886]: I0219 21:00:23.926496 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:23Z","lastTransitionTime":"2026-02-19T21:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.029393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.029466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.029491 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.029520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.029542 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.133082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.133153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.133172 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.133199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.133216 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.236622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.236991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.237157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.237371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.237575 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.341063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.341120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.341137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.341161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.341177 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.392110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.392318 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.392581 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:40.392561547 +0000 UTC m=+71.020404607 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.443919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.444044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.444070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.444100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.444122 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.446503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.446708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.446801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.446883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.446966 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.465002 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:24Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.470439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.470504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.470522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.470546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.470595 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.490960 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:24Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.493603 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:00:56.4935731 +0000 UTC m=+87.121416190 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.493806 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.494028 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 
21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.494179 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.494285 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:56.494243546 +0000 UTC m=+87.122086636 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.494191 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.494660 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.494825 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 21:00:56.49480325 +0000 UTC m=+87.122646370 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.495564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.495728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.495820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.495924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.496018 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.510416 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:24Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.515678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.515729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.515745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.515766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.515781 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.545188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.545240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.545255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.545298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.545313 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.563737 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.566314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.566387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.566410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.566441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.566459 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.577812 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:28:53.479391234 +0000 UTC Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.595641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.595778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.595959 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.595995 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.596015 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 
21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.596080 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 21:00:56.596061268 +0000 UTC m=+87.223904348 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.596579 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.596642 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.596664 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.596768 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 21:00:56.596740705 +0000 UTC m=+87.224583785 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.600538 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.600581 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.600686 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.600691 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.600717 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.600824 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.600925 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:24 crc kubenswrapper[4886]: E0219 21:00:24.601033 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.669872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.669939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.669955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.669981 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.670000 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.773176 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.773247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.773307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.773340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.773362 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.876106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.876163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.876179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.876201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.876215 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.979198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.979244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.979271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.979290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:24 crc kubenswrapper[4886]: I0219 21:00:24.979301 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:24Z","lastTransitionTime":"2026-02-19T21:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.082145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.082208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.082230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.082291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.082319 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.185782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.186478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.186511 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.186542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.186565 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.289488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.289551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.289568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.289591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.289609 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.392405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.392462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.392478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.392502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.392519 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.495235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.495337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.495366 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.495391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.495411 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.578220 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:50:18.898139635 +0000 UTC Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.598952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.599019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.599038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.599063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.599080 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.702456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.702655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.702685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.702713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.702731 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.805970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.806023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.806041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.806064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.806081 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.909255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.909356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.909377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.909404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:25 crc kubenswrapper[4886]: I0219 21:00:25.909428 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:25Z","lastTransitionTime":"2026-02-19T21:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.012193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.012253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.012298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.012322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.012340 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.077722 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.093766 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.095426 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nod
e-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.112871 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.114781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.114844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.114862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc 
kubenswrapper[4886]: I0219 21:00:26.114888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.114908 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.128681 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc 
kubenswrapper[4886]: I0219 21:00:26.146776 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.161304 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.193043 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.217307 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.218547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.218590 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.218607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.218634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.218655 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.235087 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.259194 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.281054 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.303363 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.321943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.322036 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.322056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.322089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.322110 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.326794 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:
59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.349096 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.368548 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.388674 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.406802 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:26Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.425503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.425569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.425589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.425616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.425634 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.529339 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.529408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.529428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.529454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.529474 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.578499 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:15:36.739076664 +0000 UTC Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.600933 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.601112 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:26 crc kubenswrapper[4886]: E0219 21:00:26.601392 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.601417 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.601457 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:26 crc kubenswrapper[4886]: E0219 21:00:26.601694 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:26 crc kubenswrapper[4886]: E0219 21:00:26.601878 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:26 crc kubenswrapper[4886]: E0219 21:00:26.602068 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.632671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.632719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.632731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.632748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.632762 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.735816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.735921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.735939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.735967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.735988 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.839341 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.839467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.839503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.839556 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.839590 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.942593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.942732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.942754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.942780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:26 crc kubenswrapper[4886]: I0219 21:00:26.942809 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:26Z","lastTransitionTime":"2026-02-19T21:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.045822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.045864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.045877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.045899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.045915 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.148732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.148824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.148851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.148888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.148910 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.252035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.252077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.252087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.252104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.252116 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.355069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.355139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.355163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.355195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.355222 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.457847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.457896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.457914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.457937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.457955 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.561141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.561186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.561203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.561226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.561242 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.579138 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:23:57.976275724 +0000 UTC Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.663764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.663862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.663876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.663906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.663936 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.766908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.766975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.766996 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.767028 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.767051 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.869963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.870027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.870045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.870071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.870092 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.972898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.972972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.972990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.973035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:27 crc kubenswrapper[4886]: I0219 21:00:27.973053 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:27Z","lastTransitionTime":"2026-02-19T21:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.076348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.076406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.076423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.076445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.076461 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.179514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.179578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.179594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.179618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.179634 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.282774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.282840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.282856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.282882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.282901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.385962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.386018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.386034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.386057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.386111 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.488451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.488516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.488535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.488563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.488582 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.579355 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:25:32.717682473 +0000 UTC Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.591842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.591910 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.591950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.591977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.591994 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.600765 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.600813 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.600827 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:28 crc kubenswrapper[4886]: E0219 21:00:28.600931 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:28 crc kubenswrapper[4886]: E0219 21:00:28.601098 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:28 crc kubenswrapper[4886]: E0219 21:00:28.601335 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.601497 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:28 crc kubenswrapper[4886]: E0219 21:00:28.601628 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.694901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.694978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.695003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.695566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.695862 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.799081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.799154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.799172 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.799197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.799220 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.902176 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.902255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.902310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.902340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:28 crc kubenswrapper[4886]: I0219 21:00:28.902358 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:28Z","lastTransitionTime":"2026-02-19T21:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.005306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.005363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.005380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.005403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.005420 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.108399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.108462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.108478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.108503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.108521 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.211885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.211933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.211951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.211974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.211990 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.317379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.317434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.317451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.317473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.317490 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.420866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.420936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.420958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.420988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.421013 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.523747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.523811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.523835 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.523862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.523886 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.580033 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:25:47.67343541 +0000 UTC Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.626443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.626484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.626632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.626658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.626675 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.728906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.728971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.728988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.729011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.729027 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.831729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.831794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.831815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.831842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.831864 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.935316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.935399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.935434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.935466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:29 crc kubenswrapper[4886]: I0219 21:00:29.935493 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:29Z","lastTransitionTime":"2026-02-19T21:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.037755 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.037804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.037823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.037845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.037864 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.141245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.141334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.141355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.141380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.141398 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.145873 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.162235 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.177919 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.191508 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc 
kubenswrapper[4886]: I0219 21:00:30.209633 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.227519 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.243952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.244025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.244049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 
21:00:30.244078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.244101 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.245184 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.268098 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.289238 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.305469 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.325456 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.343685 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.346194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.346245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.346319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.346344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.346362 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.363448 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff
536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.380898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.399713 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.418720 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.437010 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.449901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.449968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.449991 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.450018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.450036 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.452394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.553498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.553570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.553594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.553625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.553646 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.581337 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:16:47.298575589 +0000 UTC Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.600108 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.600155 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:30 crc kubenswrapper[4886]: E0219 21:00:30.600288 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:30 crc kubenswrapper[4886]: E0219 21:00:30.600530 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.600509 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:30 crc kubenswrapper[4886]: E0219 21:00:30.600708 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.600973 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:30 crc kubenswrapper[4886]: E0219 21:00:30.601089 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.625483 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 
20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.645973 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.657340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.657626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.657658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.657682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.657699 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.663846 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff
536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.686688 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.703145 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.725342 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.742102 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.757074 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.764657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.764705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.764718 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.764736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.764747 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.770009 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.788995 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.798403 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.812793 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.828181 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc 
kubenswrapper[4886]: I0219 21:00:30.845121 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.861686 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.867198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.867248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.867291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 
21:00:30.867317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.867337 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.875286 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.898471 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:30Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.970316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.970352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.970375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.970391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:30 crc kubenswrapper[4886]: I0219 21:00:30.970401 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:30Z","lastTransitionTime":"2026-02-19T21:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.073398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.073497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.073516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.073544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.073563 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.181647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.181712 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.181730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.181761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.181780 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.283809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.283849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.283860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.283877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.283889 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.386148 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.386183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.386193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.386210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.386220 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.489818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.489877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.489894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.489921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.489937 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.581761 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:41:52.257722956 +0000 UTC Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.593123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.593164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.593177 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.593195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.593207 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.696717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.696759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.696771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.696788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.696799 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.799136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.799186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.799203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.799226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.799242 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.902208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.902335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.902362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.902773 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:31 crc kubenswrapper[4886]: I0219 21:00:31.902827 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:31Z","lastTransitionTime":"2026-02-19T21:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.005652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.005743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.005769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.005803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.005827 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.109243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.109328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.109346 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.109374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.109391 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.212356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.212425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.212443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.212468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.212485 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.315750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.315877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.315900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.315934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.315956 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.419820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.419882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.419898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.419922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.419946 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.523055 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.523113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.523127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.523150 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.523167 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.582760 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:37:12.891279346 +0000 UTC Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.600560 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.600573 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.600724 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:32 crc kubenswrapper[4886]: E0219 21:00:32.600956 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.601017 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:32 crc kubenswrapper[4886]: E0219 21:00:32.601167 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:32 crc kubenswrapper[4886]: E0219 21:00:32.601356 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:32 crc kubenswrapper[4886]: E0219 21:00:32.601517 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.626284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.626323 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.626334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.626352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.626365 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.730910 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.730992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.731018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.731051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.731086 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.835388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.835462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.835479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.835503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.835521 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.938990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.939048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.939066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.939089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:32 crc kubenswrapper[4886]: I0219 21:00:32.939108 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:32Z","lastTransitionTime":"2026-02-19T21:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.042457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.042554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.042572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.042593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.042610 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.145397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.145459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.145478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.145505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.145523 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.248430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.248611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.248655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.248683 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.248705 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.351554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.351614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.351631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.351655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.351674 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.454521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.454598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.454621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.454649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.454672 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.557912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.558004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.558024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.558054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.558076 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.583438 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:16:46.486702829 +0000 UTC Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.601438 4886 scope.go:117] "RemoveContainer" containerID="c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e" Feb 19 21:00:33 crc kubenswrapper[4886]: E0219 21:00:33.601749 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.661922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.661995 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.662012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.662034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.662051 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.765366 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.765436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.765454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.765478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.765506 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.869772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.869825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.869865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.869895 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.869916 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.973008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.973063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.973085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.973114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:33 crc kubenswrapper[4886]: I0219 21:00:33.973135 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:33Z","lastTransitionTime":"2026-02-19T21:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.076121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.076184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.076201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.076254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.076328 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.178733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.178769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.178780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.178796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.178809 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.282245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.282333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.282351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.282378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.282395 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.387179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.387212 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.387223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.387237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.387248 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.489625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.489694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.489709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.489731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.489746 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.583960 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:24:07.118784272 +0000 UTC Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.593368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.593416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.593427 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.593444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.593455 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.600094 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.600165 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.600099 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.600099 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.600232 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.600390 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.600522 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.600616 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.695821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.695887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.695912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.695939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.695956 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.765445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.765485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.765499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.765517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.765529 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.777986 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:34Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.782737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.782780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.782794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.782815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.782829 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.800641 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:34Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.805478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.805518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.805530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.805547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.805559 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.824819 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:34Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.829109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.829139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.829149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.829165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.829177 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.853923 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:34Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.858809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.858916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.858954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.858979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.858998 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.874997 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:34Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:34 crc kubenswrapper[4886]: E0219 21:00:34.875217 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.877249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.877379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.877429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.877453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.877470 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.980855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.980920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.980937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.980962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:34 crc kubenswrapper[4886]: I0219 21:00:34.980979 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:34Z","lastTransitionTime":"2026-02-19T21:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.083317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.083373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.083390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.083413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.083429 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.186622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.186685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.186700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.186725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.186743 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.289352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.289417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.289450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.289506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.289535 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.393693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.393784 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.393803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.393827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.393844 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.496830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.496890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.496908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.496935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.496980 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.584930 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:41:54.525747159 +0000 UTC Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.599979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.600036 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.600053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.600075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.600095 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.702760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.702810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.702823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.702840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.702854 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.806080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.806131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.806147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.806169 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.806185 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.908651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.908711 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.908725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.908741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:35 crc kubenswrapper[4886]: I0219 21:00:35.908751 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:35Z","lastTransitionTime":"2026-02-19T21:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.011752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.011798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.011813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.011831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.011845 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.120251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.120331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.120344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.120359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.120369 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.223485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.223532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.223548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.223574 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.223590 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.326407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.326451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.326468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.326490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.326507 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.428872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.428913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.428924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.428939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.428949 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.531011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.531103 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.531122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.531145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.531164 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.585568 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:58:08.481009914 +0000 UTC Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.601008 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.601095 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.601019 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.601295 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:36 crc kubenswrapper[4886]: E0219 21:00:36.601246 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:36 crc kubenswrapper[4886]: E0219 21:00:36.601392 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:36 crc kubenswrapper[4886]: E0219 21:00:36.601450 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:36 crc kubenswrapper[4886]: E0219 21:00:36.601547 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.633709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.633770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.633791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.633820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.633840 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.735980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.736020 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.736029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.736045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.736056 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.838515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.838564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.838576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.838595 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.838608 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.940654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.940706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.940715 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.940728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:36 crc kubenswrapper[4886]: I0219 21:00:36.940737 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:36Z","lastTransitionTime":"2026-02-19T21:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.042553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.042613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.042630 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.042652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.042669 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.144573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.144612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.144623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.144639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.144650 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.246797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.246850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.246867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.246884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.246896 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.349902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.349998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.350016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.350040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.350059 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.452493 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.452532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.452543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.452559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.452570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.555197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.555242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.555254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.555285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.555298 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.586603 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:58:22.790160062 +0000 UTC Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.615802 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.657772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.657807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.657830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.657842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.657852 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.760487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.760525 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.760533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.760547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.760556 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.862797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.862825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.862832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.862844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.862852 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.965107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.965137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.965145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.965158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:37 crc kubenswrapper[4886]: I0219 21:00:37.965167 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:37Z","lastTransitionTime":"2026-02-19T21:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.066431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.066454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.066463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.066474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.066485 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.167875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.167908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.167917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.167928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.167948 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.270661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.270729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.270752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.270780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.270803 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.373518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.373553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.373565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.373582 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.373593 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.475205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.475236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.475244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.475277 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.475286 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.577661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.577697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.577707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.577723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.577735 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.587131 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:17:43.243675921 +0000 UTC Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.600559 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.600573 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.600573 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.600629 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:38 crc kubenswrapper[4886]: E0219 21:00:38.600726 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:38 crc kubenswrapper[4886]: E0219 21:00:38.600807 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:38 crc kubenswrapper[4886]: E0219 21:00:38.600883 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:38 crc kubenswrapper[4886]: E0219 21:00:38.600910 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.680377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.680421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.680433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.680446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.680455 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.782875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.782953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.782976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.783004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.783026 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.885253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.885289 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.885297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.885308 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.885318 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.988099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.988142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.988154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.988170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:38 crc kubenswrapper[4886]: I0219 21:00:38.988181 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:38Z","lastTransitionTime":"2026-02-19T21:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.090948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.091003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.091019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.091044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.091060 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.194341 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.194386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.194397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.194414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.194426 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.296385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.296444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.296465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.296534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.296554 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.398255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.398319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.398330 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.398348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.398364 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.500900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.500927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.500935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.500948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.500958 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.587745 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:24:42.936037327 +0000 UTC Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.603233 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.603276 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.603285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.603302 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.603311 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.705610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.705688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.705705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.705723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.705738 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.807514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.807552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.807566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.807581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.807591 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.910024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.910083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.910098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.910120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:39 crc kubenswrapper[4886]: I0219 21:00:39.910135 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:39Z","lastTransitionTime":"2026-02-19T21:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.012315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.012384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.012402 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.012427 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.012445 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.114299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.114361 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.114379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.114402 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.114419 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.217228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.217331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.217356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.217381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.217398 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.320087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.320117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.320125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.320138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.320147 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.423935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.423965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.423974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.423987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.423998 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.472320 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:40 crc kubenswrapper[4886]: E0219 21:00:40.472628 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:40 crc kubenswrapper[4886]: E0219 21:00:40.473012 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:01:12.472973239 +0000 UTC m=+103.100816339 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.526418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.526439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.526459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.526470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.526477 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.587994 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:17:33.538730044 +0000 UTC Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.600322 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.600418 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.600413 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.600457 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:40 crc kubenswrapper[4886]: E0219 21:00:40.601148 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:40 crc kubenswrapper[4886]: E0219 21:00:40.601388 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:40 crc kubenswrapper[4886]: E0219 21:00:40.601300 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:40 crc kubenswrapper[4886]: E0219 21:00:40.601531 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.613972 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.627708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.627734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.627743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.627752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.627760 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.628806 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.647752 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.664297 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.682794 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6
139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.704184 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.722864 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.729878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.729934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.729953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.729976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.729995 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.748876 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.765859 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e
380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.781596 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.798497 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.811318 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.823018 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.833647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.833673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.833682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.833694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.833703 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.840661 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z 
is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.854435 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.865569 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.878991 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.889906 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:40Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:40 crc 
kubenswrapper[4886]: I0219 21:00:40.936541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.936748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.936866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.937000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:40 crc kubenswrapper[4886]: I0219 21:00:40.937123 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:40Z","lastTransitionTime":"2026-02-19T21:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.040579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.040641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.040658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.040683 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.040701 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.143024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.143084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.143100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.143122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.143141 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.246373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.246453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.246477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.246506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.246528 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.349251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.349318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.349328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.349365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.349377 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.451588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.451669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.451681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.451697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.451708 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.554140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.554215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.554228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.554249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.554294 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.588923 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:34:08.242309918 +0000 UTC Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.656524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.656582 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.656599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.656619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.656635 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.758710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.758780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.758793 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.758838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.758850 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.861594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.861626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.861634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.861647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.861655 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.963817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.963837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.963845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.963854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:41 crc kubenswrapper[4886]: I0219 21:00:41.963862 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:41Z","lastTransitionTime":"2026-02-19T21:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.066132 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.066183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.066194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.066213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.066228 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.129421 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/0.log" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.129479 4886 generic.go:334] "Generic (PLEG): container finished" podID="83f8fca5-68c6-4300-b2d8-64a58bf92a64" containerID="3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10" exitCode=1 Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.129513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerDied","Data":"3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.129900 4886 scope.go:117] "RemoveContainer" containerID="3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.151833 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.167142 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.169348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.169377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.169386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.169401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.169412 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.189086 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.203685 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.221551 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.239883 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.256239 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.269814 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.271209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.271297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.271319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.271339 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.271356 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.282058 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.300232 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.314150 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.325608 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.337399 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.348996 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.361935 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.371025 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc 
kubenswrapper[4886]: I0219 21:00:42.373656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.373714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.373732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.373756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.373775 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.379462 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.388619 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bb
a8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:42Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.475923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.475964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.475974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.475988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.475997 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.578555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.578640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.578664 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.578693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.578715 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.590082 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:33:18.481582876 +0000 UTC Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.600672 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.600768 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:42 crc kubenswrapper[4886]: E0219 21:00:42.600813 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.600706 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.600972 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:42 crc kubenswrapper[4886]: E0219 21:00:42.600995 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:42 crc kubenswrapper[4886]: E0219 21:00:42.601023 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:42 crc kubenswrapper[4886]: E0219 21:00:42.601061 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.682803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.682881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.682967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.682997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.683017 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.784884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.784946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.784966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.784991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.785023 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.887546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.887605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.887624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.887646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.887664 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.989935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.989976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.989984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.990000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:42 crc kubenswrapper[4886]: I0219 21:00:42.990011 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:42Z","lastTransitionTime":"2026-02-19T21:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.092170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.092217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.092234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.092256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.092296 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.135695 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/0.log" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.135894 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerStarted","Data":"0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.148192 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc 
kubenswrapper[4886]: I0219 21:00:43.160699 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b
s752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.174944 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.194878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.194909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.194921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.194938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.194951 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.205439 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.222778 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.239746 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.252515 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.266516 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.281514 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.293990 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.297339 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.297402 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.297454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.297489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.297507 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.305232 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.318717 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6
139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.333354 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.347922 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.370668 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.388165 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.399933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.399979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.399988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.400007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.400019 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.401055 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.416710 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:43Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.502377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.502432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.502451 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.502474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.502490 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.590423 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:48:19.077025142 +0000 UTC Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.605447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.605516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.605539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.605566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.605588 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.708551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.708610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.708747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.708774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.708793 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.811491 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.811537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.811551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.811567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.811582 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.913677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.913711 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.913720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.913733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:43 crc kubenswrapper[4886]: I0219 21:00:43.913744 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:43Z","lastTransitionTime":"2026-02-19T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.016538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.016583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.016591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.016604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.016615 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.118928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.118965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.118975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.118991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.119000 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.221780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.221845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.221862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.221887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.221903 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.323921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.323983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.323999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.324022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.324039 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.426129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.426167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.426176 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.426192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.426202 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.528529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.528566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.528575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.528592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.528601 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.591274 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:58:39.443632196 +0000 UTC Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.600651 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.600671 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:44 crc kubenswrapper[4886]: E0219 21:00:44.600807 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.600834 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:44 crc kubenswrapper[4886]: E0219 21:00:44.600928 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:44 crc kubenswrapper[4886]: E0219 21:00:44.600996 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.601127 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:44 crc kubenswrapper[4886]: E0219 21:00:44.601196 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.632029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.632057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.632065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.632077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.632086 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.735002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.735043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.735054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.735069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.735080 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.838237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.838314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.838332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.838359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.838376 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.941327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.941384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.941401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.941425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:44 crc kubenswrapper[4886]: I0219 21:00:44.941442 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:44Z","lastTransitionTime":"2026-02-19T21:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.040048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.040104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.040120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.040144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.040160 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: E0219 21:00:45.059746 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:45Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.063419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.063449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.063459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.063472 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.063481 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: E0219 21:00:45.075179 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:45Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.079697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.079754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.079773 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.079796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.079813 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: E0219 21:00:45.091970 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:45Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.096013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.096065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.096081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.096102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.096114 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: E0219 21:00:45.109657 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:45Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.113238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.113336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.113384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.113415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.113436 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: E0219 21:00:45.127590 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:45Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:45 crc kubenswrapper[4886]: E0219 21:00:45.127733 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.129516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.129550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.129563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.129579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.129590 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.232153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.232204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.232222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.232247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.232298 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.333989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.334053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.334070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.334095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.334149 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.436834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.436896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.436915 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.436942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.436960 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.540207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.540309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.540328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.540356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.540372 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.591928 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:36:50.348707946 +0000 UTC Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.643301 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.643359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.643376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.643398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.643418 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.745706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.745742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.745751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.745764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.745776 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.847956 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.847986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.847994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.848008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.848018 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.969958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.970021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.970038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.970061 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:45 crc kubenswrapper[4886]: I0219 21:00:45.970077 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:45Z","lastTransitionTime":"2026-02-19T21:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.073099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.073154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.073178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.073204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.073226 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.176211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.176258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.176318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.176347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.176414 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.279475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.279516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.279532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.279554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.279570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.383196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.383303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.383321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.383347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.383364 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.485592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.485651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.485669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.485693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.485711 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.588422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.588483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.588501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.588879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.588927 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.592692 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:47:35.504950784 +0000 UTC Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.600431 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.600440 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.600583 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:46 crc kubenswrapper[4886]: E0219 21:00:46.600793 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:46 crc kubenswrapper[4886]: E0219 21:00:46.600892 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:46 crc kubenswrapper[4886]: E0219 21:00:46.601545 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.604503 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:46 crc kubenswrapper[4886]: E0219 21:00:46.604621 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.692052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.692096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.692114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.692134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.692150 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.794368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.794404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.794420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.794439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.794455 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.897296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.897336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.897357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.897377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.897393 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.999613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.999654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:46 crc kubenswrapper[4886]: I0219 21:00:46.999671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:46.999691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:46.999708 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:46Z","lastTransitionTime":"2026-02-19T21:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.102612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.102657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.102675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.102697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.102714 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.205781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.205824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.205840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.205861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.205878 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.308037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.308079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.308094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.308115 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.308131 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.411081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.411125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.411141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.411163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.411180 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.514342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.514391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.514410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.514436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.514458 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.593325 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:46:33.709682611 +0000 UTC Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.601809 4886 scope.go:117] "RemoveContainer" containerID="c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.616955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.616993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.617001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.617017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.617026 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.720501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.720633 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.720692 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.720763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.720823 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.823089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.823126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.823144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.823168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.823187 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.926033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.926084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.926096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.926129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:47 crc kubenswrapper[4886]: I0219 21:00:47.926143 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:47Z","lastTransitionTime":"2026-02-19T21:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.029146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.029194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.029210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.029232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.029249 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.131757 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.131822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.131859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.131891 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.131913 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.154642 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/2.log" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.157999 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.158632 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.174769 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.196097 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59
:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.215494 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.234356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.234412 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.234428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc 
kubenswrapper[4886]: I0219 21:00:48.234449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.234463 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.237194 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.252965 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.265894 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.276485 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.291533 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.307479 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.320799 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.331189 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.336420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.336460 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.336471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.336492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.336510 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.366829 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.387433 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.415216 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.429536 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 
[cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"nam
e\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.438075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.438106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.438114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.438127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.438136 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.440021 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.449749 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.460568 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.540387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.540413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.540421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.540433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.540441 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.593611 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:28:57.573350201 +0000 UTC Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.601524 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.601541 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:48 crc kubenswrapper[4886]: E0219 21:00:48.601715 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:48 crc kubenswrapper[4886]: E0219 21:00:48.601750 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.601483 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.601651 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:48 crc kubenswrapper[4886]: E0219 21:00:48.601839 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:48 crc kubenswrapper[4886]: E0219 21:00:48.601886 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.643060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.643112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.643130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.643151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.643168 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.745877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.745916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.745928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.745944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.745956 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.848551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.848590 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.848598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.848613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.848622 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.951358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.951438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.951463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.951494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:48 crc kubenswrapper[4886]: I0219 21:00:48.951518 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:48Z","lastTransitionTime":"2026-02-19T21:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.054459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.054542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.054566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.054598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.054621 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.156913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.156973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.156993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.157016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.157031 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.161796 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/3.log" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.162655 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/2.log" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.165728 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" exitCode=1 Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.165766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.165795 4886 scope.go:117] "RemoveContainer" containerID="c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.166966 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:00:49 crc kubenswrapper[4886]: E0219 21:00:49.167316 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.185008 4886 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.201167 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc 
kubenswrapper[4886]: I0219 21:00:49.217121 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b
s752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.230471 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.259762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.259829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.259852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.259880 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.259900 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.260674 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0701f848e23a34d04b8eb5ac75a11714795bbbc3bc1b347d11dd8fb8468007e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:20Z\\\",\\\"message\\\":\\\".go:1336] Added *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590035 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 21:00:20.590171 6579 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 21:00:20.590187 6579 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
21:00:20.590212 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 21:00:20.590223 6579 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 21:00:20.590038 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 21:00:20.590241 6579 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 21:00:20.590253 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 21:00:20.590292 6579 factory.go:656] Stopping watch factory\\\\nI0219 21:00:20.590316 6579 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 21:00:20.590325 6579 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 21:00:20.590762 6579 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0219 21:00:20.590932 6579 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0219 21:00:20.591027 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0219 21:00:20.591095 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 21:00:20.591244 6579 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:48Z\\\",\\\"message\\\":\\\"gistry/node-ca-lngzl in Admin Network Policy controller\\\\nF0219 21:00:48.501006 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:48.501014 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-lngzl Admin Network Policy controller: took 6.111µs\\\\nI0219 21:00:48.501061 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-rnffz in Admin Network Policy controller\\\\nI0219 21:00:48.501067 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-rnffz Admin Network Policy controller: took 6.68µs\\\\nI0219 21:00:48.501074 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-di\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\"
:\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.277957 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.299024 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.317974 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.329605 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.343304 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.356386 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.363677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.363746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.363764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.363787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.363804 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.369651 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.383634 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6
139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.397912 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.412609 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.424443 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.443772 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.465741 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:49Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.466139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.466192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.466203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.466218 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.466227 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.569727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.569802 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.569829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.569866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.569890 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.594326 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:16:09.13896163 +0000 UTC Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.672555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.672660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.672680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.672701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.672718 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.775253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.775361 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.775394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.775419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.775436 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.878183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.878241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.878303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.878331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.878351 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.981495 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.985751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.985795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.985925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:49 crc kubenswrapper[4886]: I0219 21:00:49.985957 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:49Z","lastTransitionTime":"2026-02-19T21:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.088754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.088807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.088827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.088853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.088874 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.171758 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/3.log" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.176586 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:00:50 crc kubenswrapper[4886]: E0219 21:00:50.176867 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.191031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.191086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.191105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.191127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.191144 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.198893 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.217718 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.236897 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.254689 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.274116 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.289169 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.294758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.294807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.294824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.294848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.294866 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.306910 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.322403 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc 
kubenswrapper[4886]: I0219 21:00:50.338984 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.356615 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.370857 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.398428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.398516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.398537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.398568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.398593 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.409012 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:48Z\\\",\\\"message\\\":\\\"gistry/node-ca-lngzl in Admin Network Policy controller\\\\nF0219 21:00:48.501006 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed 
to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:48.501014 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-lngzl Admin Network Policy controller: took 6.111µs\\\\nI0219 21:00:48.501061 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-rnffz in Admin Network Policy controller\\\\nI0219 21:00:48.501067 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-rnffz Admin Network Policy controller: took 6.68µs\\\\nI0219 21:00:48.501074 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-di\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.424725 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.444439 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6
139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.464057 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.485916 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.501186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.501239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.501285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.501309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.501327 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.507459 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.525795 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e
380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.594636 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:18:12.044510387 +0000 UTC Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.601127 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.601168 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.601324 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:50 crc kubenswrapper[4886]: E0219 21:00:50.601436 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.601576 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:50 crc kubenswrapper[4886]: E0219 21:00:50.601756 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:50 crc kubenswrapper[4886]: E0219 21:00:50.601935 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:50 crc kubenswrapper[4886]: E0219 21:00:50.602194 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.603601 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.603654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.603671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.603694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.603711 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.620671 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.634919 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bb
a8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.649718 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc 
kubenswrapper[4886]: I0219 21:00:50.663784 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.681207 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.694977 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.706401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.706438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.706449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.706465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.706477 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.726794 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:48Z\\\",\\\"message\\\":\\\"gistry/node-ca-lngzl in Admin Network Policy controller\\\\nF0219 21:00:48.501006 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed 
to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:48.501014 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-lngzl Admin Network Policy controller: took 6.111µs\\\\nI0219 21:00:48.501061 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-rnffz in Admin Network Policy controller\\\\nI0219 21:00:48.501067 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-rnffz Admin Network Policy controller: took 6.68µs\\\\nI0219 21:00:48.501074 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-di\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.746399 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.767000 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6
139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.786898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.804429 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.809727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.809783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.809801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.809825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.809841 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.828488 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.845626 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e
380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.866378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.886165 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.900782 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.913142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.913256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.913310 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.913334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.913354 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:50Z","lastTransitionTime":"2026-02-19T21:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.919898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:50 crc kubenswrapper[4886]: I0219 21:00:50.935317 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:50Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.016391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.016718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.016737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.016764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.016781 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.119767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.119815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.119831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.119864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.119882 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.222189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.222241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.222375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.222430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.222452 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.328411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.328470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.328488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.328522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.328547 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.432227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.432330 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.432348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.432370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.432387 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.535799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.535859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.535875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.535899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.535919 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.596136 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:28:00.002938159 +0000 UTC Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.638968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.639024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.639042 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.639065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.639082 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.741588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.741653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.741671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.741698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.741716 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.845217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.845304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.845322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.845344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.845362 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.948549 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.948613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.948631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.948655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:51 crc kubenswrapper[4886]: I0219 21:00:51.948672 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:51Z","lastTransitionTime":"2026-02-19T21:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.052427 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.052501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.052518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.052543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.052562 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.155383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.155446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.155464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.155488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.155508 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.258632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.258686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.258703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.258726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.258743 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.362144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.362241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.362259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.362417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.362446 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.465234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.465342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.465367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.465402 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.465423 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.568312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.568380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.568403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.568433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.568455 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.596904 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:41:40.718966207 +0000 UTC Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.600351 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.600413 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.600506 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:52 crc kubenswrapper[4886]: E0219 21:00:52.600712 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.600750 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:52 crc kubenswrapper[4886]: E0219 21:00:52.600883 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:52 crc kubenswrapper[4886]: E0219 21:00:52.600976 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:52 crc kubenswrapper[4886]: E0219 21:00:52.601104 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.671522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.671579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.671602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.671630 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.671652 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.774860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.774937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.774954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.774979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.774997 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.877700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.877737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.877748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.877767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.877779 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.981614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.981753 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.981785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.981817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:52 crc kubenswrapper[4886]: I0219 21:00:52.981867 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:52Z","lastTransitionTime":"2026-02-19T21:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.084730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.084834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.084854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.084913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.084933 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.187409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.187462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.187481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.187506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.187522 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.291103 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.291190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.291215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.291243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.291296 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.393777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.393846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.393859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.393875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.393886 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.497203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.497316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.497343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.497372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.497393 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.597091 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:04:42.761050831 +0000 UTC Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.600498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.600553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.600570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.600592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.600608 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.703563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.703646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.703672 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.703703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.703726 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.807105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.807203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.807223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.807247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.807299 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.909606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.909663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.909681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.909705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:53 crc kubenswrapper[4886]: I0219 21:00:53.909722 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:53Z","lastTransitionTime":"2026-02-19T21:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.012433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.012477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.012489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.012505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.012516 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.116122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.116162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.116173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.116189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.116201 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.220108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.220164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.220180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.220203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.220221 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.323189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.323243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.323255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.323293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.323305 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.426527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.426611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.426630 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.426655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.426673 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.529207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.529297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.529322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.529350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.529371 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.597597 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:19:44.409541921 +0000 UTC Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.601034 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:54 crc kubenswrapper[4886]: E0219 21:00:54.601173 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.601439 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.601462 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:54 crc kubenswrapper[4886]: E0219 21:00:54.601555 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.601661 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:54 crc kubenswrapper[4886]: E0219 21:00:54.601784 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:54 crc kubenswrapper[4886]: E0219 21:00:54.601893 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.631914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.631980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.632002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.632030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.632051 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.734702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.734748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.734758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.734777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.734789 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.837559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.837609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.837630 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.837654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.837673 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.941095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.941168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.941193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.941222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:54 crc kubenswrapper[4886]: I0219 21:00:54.941247 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:54Z","lastTransitionTime":"2026-02-19T21:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.044546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.044627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.044651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.044681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.044703 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.148060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.148125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.148147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.148179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.148203 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.251257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.251401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.251418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.251441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.251460 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.354318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.354390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.354407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.354435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.354452 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.422932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.423009 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.423027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.423052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.423071 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: E0219 21:00:55.444318 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:55Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.450136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.450195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.450219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.450249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.450321 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: E0219 21:00:55.466435 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:55Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.469452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.469520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.469539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.469563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.469583 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: E0219 21:00:55.486484 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:55Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.490049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.490075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.490082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.490095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.490104 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: E0219 21:00:55.505323 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:55Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.509777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.509825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.509838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.509851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.509861 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: E0219 21:00:55.529026 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:55Z is after 2025-08-24T17:21:41Z" Feb 19 21:00:55 crc kubenswrapper[4886]: E0219 21:00:55.529196 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.530997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.531062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.531084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.531112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.531129 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.598629 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:19:00.518425717 +0000 UTC Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.633612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.633671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.633683 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.633701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.633715 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.736708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.736817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.736838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.736901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.736929 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.840573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.840639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.840656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.840680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.840699 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.943494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.943905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.944121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.944355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:55 crc kubenswrapper[4886]: I0219 21:00:55.944521 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:55Z","lastTransitionTime":"2026-02-19T21:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.047179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.047661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.047921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.048139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.048327 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.151874 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.152296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.152474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.152670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.152852 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.255104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.255192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.255210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.255237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.255256 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.358313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.358366 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.358384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.358407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.358424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.461306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.461859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.462056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.462313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.462653 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.535878 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.536079 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.536167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.536367 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.536443 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:02:00.536421583 +0000 UTC m=+151.164264673 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.536613 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.536665 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:02:00.536624897 +0000 UTC m=+151.164467977 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.536745 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 21:02:00.53672005 +0000 UTC m=+151.164563180 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.566138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.566199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.566215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.566242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.566259 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.599341 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:19:12.88888753 +0000 UTC Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.600736 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.600780 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.600895 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.601064 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.601088 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.601248 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.601434 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.601639 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.637240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.637496 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637569 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637630 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:56 crc 
kubenswrapper[4886]: E0219 21:00:56.637662 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637723 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637769 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637783 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 21:02:00.637749223 +0000 UTC m=+151.265592323 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637789 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:56 crc kubenswrapper[4886]: E0219 21:00:56.637871 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 21:02:00.637848896 +0000 UTC m=+151.265691976 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.669123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.669191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.669209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.669232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.669249 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.772246 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.772353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.772370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.772394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.772412 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.874754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.874806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.874820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.874846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.874861 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.978513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.978543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.978551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.978563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:56 crc kubenswrapper[4886]: I0219 21:00:56.978570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:56Z","lastTransitionTime":"2026-02-19T21:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.081752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.081815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.081832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.081860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.081877 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.185244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.185375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.185401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.185437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.185464 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.289071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.289127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.289145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.289170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.289191 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.392545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.392775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.392787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.392803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.392816 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.495662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.495699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.495710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.495725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.495734 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.598921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.598980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.598999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.599027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.599048 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.599553 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:14:44.067558675 +0000 UTC Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.701464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.701532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.701552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.701578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.701596 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.804584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.804820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.804831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.804849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.804861 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.907646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.907701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.907717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.907738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:57 crc kubenswrapper[4886]: I0219 21:00:57.907754 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:57Z","lastTransitionTime":"2026-02-19T21:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.011002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.011047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.011063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.011086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.011104 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.114127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.114203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.114229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.114294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.114317 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.217045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.217100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.217117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.217153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.217170 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.320847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.320905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.320961 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.320991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.321012 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.424657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.424733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.424754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.424778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.424796 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.527636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.527700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.527716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.527738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.527756 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.600680 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:41:31.713630333 +0000 UTC Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.600854 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.600901 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:00:58 crc kubenswrapper[4886]: E0219 21:00:58.601006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.601054 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.601117 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:00:58 crc kubenswrapper[4886]: E0219 21:00:58.601314 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:00:58 crc kubenswrapper[4886]: E0219 21:00:58.601456 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:00:58 crc kubenswrapper[4886]: E0219 21:00:58.601717 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.631153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.631321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.631351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.631405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.631427 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.735039 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.735160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.735184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.735217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.735240 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.841168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.841335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.841363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.841393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.841413 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.945141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.945215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.945232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.945335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:58 crc kubenswrapper[4886]: I0219 21:00:58.945356 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:58Z","lastTransitionTime":"2026-02-19T21:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.051421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.051489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.051506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.051531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.051548 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.154747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.154800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.154817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.154841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.154910 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.258540 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.258607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.258624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.258648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.258666 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.361982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.362077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.362097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.362129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.362156 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.465693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.465745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.465801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.465853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.465870 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.569850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.569918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.569936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.569960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.569977 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.601574 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:55:25.436826981 +0000 UTC Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.673063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.673131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.673155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.673184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.673207 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.776389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.776478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.776500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.776524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.776543 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.879001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.879101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.879123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.879147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.879166 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.982163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.982239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.982312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.982349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:00:59 crc kubenswrapper[4886]: I0219 21:00:59.982373 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:00:59Z","lastTransitionTime":"2026-02-19T21:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.085844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.085907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.085930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.085960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.085982 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.188896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.188971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.188988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.189014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.189032 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.291330 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.291395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.291412 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.291438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.291456 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.395035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.395121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.395139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.395195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.395214 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.498903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.498976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.499002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.499033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.499125 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.600156 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.600151 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.600302 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:00 crc kubenswrapper[4886]: E0219 21:01:00.600411 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.600461 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:00 crc kubenswrapper[4886]: E0219 21:01:00.600648 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:00 crc kubenswrapper[4886]: E0219 21:01:00.600787 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:00 crc kubenswrapper[4886]: E0219 21:01:00.600915 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.601699 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:38:28.419639018 +0000 UTC Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.602072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.602130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.602153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.602180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.602201 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.624842 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.643987 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.661553 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.680087 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.701326 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.706739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.706790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.706807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.706830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.706851 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.721564 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.740401 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.757486 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc 
kubenswrapper[4886]: I0219 21:01:00.775339 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.795304 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.809607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.809686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.809707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 
21:01:00.809733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.809751 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.812198 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.843428 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:48Z\\\",\\\"message\\\":\\\"gistry/node-ca-lngzl in Admin Network Policy controller\\\\nF0219 21:00:48.501006 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed 
to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:48.501014 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-lngzl Admin Network Policy controller: took 6.111µs\\\\nI0219 21:00:48.501061 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-rnffz in Admin Network Policy controller\\\\nI0219 21:00:48.501067 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-rnffz Admin Network Policy controller: took 6.68µs\\\\nI0219 21:00:48.501074 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-di\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.858899 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.881341 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6
139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.902009 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.912786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.912852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.912870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.912897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.912915 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:00Z","lastTransitionTime":"2026-02-19T21:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.923120 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff
536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.945772 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3
a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:00 crc kubenswrapper[4886]: I0219 21:01:00.962947 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T21:01:00Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.015900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.015954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.015973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.015999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.016017 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.118235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.118335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.118353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.118376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.118394 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.221494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.221696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.221731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.221765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.221789 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.324423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.324519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.324552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.324576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.324595 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.427047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.427110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.427163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.427191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.427210 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.530021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.530070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.530082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.530100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.530113 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.602396 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:01:40.665162469 +0000 UTC Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.632662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.632750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.632765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.632785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.632798 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.735384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.735433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.735453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.735479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.735496 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.838400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.838463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.838480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.838505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.838525 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.941561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.941625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.941646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.941670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:01 crc kubenswrapper[4886]: I0219 21:01:01.941686 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:01Z","lastTransitionTime":"2026-02-19T21:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.044650 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.044693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.044704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.044722 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.044734 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.147813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.147861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.147871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.147887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.147898 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.250470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.250507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.250521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.250538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.250549 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.353345 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.353407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.353424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.353452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.353477 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.456078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.456124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.456136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.456152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.456163 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.558665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.558713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.558725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.558743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.558759 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.600485 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.600526 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:02 crc kubenswrapper[4886]: E0219 21:01:02.600642 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.600742 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:02 crc kubenswrapper[4886]: E0219 21:01:02.600927 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.600994 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:02 crc kubenswrapper[4886]: E0219 21:01:02.601082 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:02 crc kubenswrapper[4886]: E0219 21:01:02.601162 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.602206 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:01:02 crc kubenswrapper[4886]: E0219 21:01:02.602543 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.602771 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:51:26.354921357 +0000 UTC Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.661850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.661905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.661922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.661945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.661963 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.765212 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.765338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.765370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.765403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.765424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.868332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.868393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.868410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.868439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.868459 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.971503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.971548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.971566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.971590 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:02 crc kubenswrapper[4886]: I0219 21:01:02.971605 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:02Z","lastTransitionTime":"2026-02-19T21:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.075175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.075243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.075559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.075615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.075635 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.179236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.179311 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.179349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.179374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.179391 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.282181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.282228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.282244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.282299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.282317 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.384927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.384969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.384980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.384996 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.385008 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.491154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.491206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.491221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.491239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.491273 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.594387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.594444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.594461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.594486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.594503 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.603872 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:56:06.60411383 +0000 UTC Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.697877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.697924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.697936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.697953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.697965 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.801229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.801322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.801342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.801367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.801385 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.904784 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.904866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.904890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.904918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:03 crc kubenswrapper[4886]: I0219 21:01:03.904938 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:03Z","lastTransitionTime":"2026-02-19T21:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.007744 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.007801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.007818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.007840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.007858 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.110581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.110658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.110739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.110767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.110784 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.214348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.214433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.214450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.214571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.214628 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.317982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.318033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.318044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.318064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.318079 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.421057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.421110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.421122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.421141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.421152 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.524601 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.524671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.524690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.524714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.524741 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.600696 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:04 crc kubenswrapper[4886]: E0219 21:01:04.600855 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.600933 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:04 crc kubenswrapper[4886]: E0219 21:01:04.600993 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.601300 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:04 crc kubenswrapper[4886]: E0219 21:01:04.601359 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.601648 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:04 crc kubenswrapper[4886]: E0219 21:01:04.601961 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.604761 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:25:39.579807908 +0000 UTC Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.628003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.628081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.628104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.628137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.628162 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.731841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.731918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.731936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.731961 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.731978 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.834438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.834501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.834519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.834542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.834559 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.937242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.937350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.937374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.937404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:04 crc kubenswrapper[4886]: I0219 21:01:04.937425 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:04Z","lastTransitionTime":"2026-02-19T21:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.040921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.040984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.041002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.041026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.041046 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.143818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.143879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.143896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.143921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.143938 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.247497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.247589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.247613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.247644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.247667 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.350625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.350684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.350701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.350725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.350742 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.454058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.454127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.454152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.454181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.454204 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.557108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.557160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.557174 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.557190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.557202 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.605043 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:11:16.293542974 +0000 UTC Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.660238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.660335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.660354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.660378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.660398 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.710633 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.710677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.710687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.710706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.710719 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: E0219 21:01:05.731986 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.736568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.736617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.736634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.736657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.736676 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: E0219 21:01:05.757076 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.761921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.761977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.761997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.762019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.762036 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: E0219 21:01:05.781852 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.786181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.786211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.786221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.786237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.786289 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: E0219 21:01:05.806845 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.812043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.812102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.812119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.812145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.812166 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: E0219 21:01:05.831140 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T21:01:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"82b5df03-b579-404f-8f39-285e5c50c205\\\",\\\"systemUUID\\\":\\\"c49a73fe-5379-4c2f-8aa6-d3284e3163e6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:05 crc kubenswrapper[4886]: E0219 21:01:05.831393 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.833545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.833598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.833621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.833646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.833666 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.937592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.937659 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.937676 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.937700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:05 crc kubenswrapper[4886]: I0219 21:01:05.937722 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:05Z","lastTransitionTime":"2026-02-19T21:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.040899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.041516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.041877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.042078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.042245 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.144599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.144923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.145045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.145215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.145432 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.247618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.247650 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.247659 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.247673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.247683 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.350437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.350480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.350490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.350505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.350516 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.453806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.453881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.453895 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.453926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.453943 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.556839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.556898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.556920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.556950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.556972 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.600317 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.600356 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.600351 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:06 crc kubenswrapper[4886]: E0219 21:01:06.600534 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:06 crc kubenswrapper[4886]: E0219 21:01:06.600644 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:06 crc kubenswrapper[4886]: E0219 21:01:06.600768 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.601194 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:06 crc kubenswrapper[4886]: E0219 21:01:06.601401 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.605217 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:27:35.358077598 +0000 UTC Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.659872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.659918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.659934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.659956 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.659972 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.763295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.763353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.763377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.763411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.763431 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.867230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.867333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.867353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.867377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.867395 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.969828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.969890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.969907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.969937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:06 crc kubenswrapper[4886]: I0219 21:01:06.969957 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:06Z","lastTransitionTime":"2026-02-19T21:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.072838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.072902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.072922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.072946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.072963 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.175881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.175923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.175934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.175948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.175959 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.278188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.278248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.278291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.278316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.278334 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.381421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.381492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.381510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.381535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.381553 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.484550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.484663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.484686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.484709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.484759 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.588065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.588131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.588148 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.588173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.588191 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.606021 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:59:53.225486289 +0000 UTC Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.691330 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.691383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.691400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.691429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.691446 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.795128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.795197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.795217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.795243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.795289 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.898528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.898593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.898610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.898639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:07 crc kubenswrapper[4886]: I0219 21:01:07.898657 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:07Z","lastTransitionTime":"2026-02-19T21:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.001795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.001836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.001844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.001861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.001874 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.104754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.104806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.104822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.104848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.104865 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.208149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.208226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.208244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.208294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.208316 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.311534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.311584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.311600 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.311624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.311642 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.415610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.415685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.415702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.415760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.415787 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.519000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.519082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.519102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.519134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.519157 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.601029 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.601241 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.601241 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.601331 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:08 crc kubenswrapper[4886]: E0219 21:01:08.601926 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:08 crc kubenswrapper[4886]: E0219 21:01:08.601662 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:08 crc kubenswrapper[4886]: E0219 21:01:08.601478 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:08 crc kubenswrapper[4886]: E0219 21:01:08.602031 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.606937 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:32:14.978274731 +0000 UTC Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.622596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.622669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.622691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.622719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.622742 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.726503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.726963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.727100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.727235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.727420 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.831344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.831424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.831447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.831475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.831494 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.935071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.935153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.935171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.935195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:08 crc kubenswrapper[4886]: I0219 21:01:08.935212 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:08Z","lastTransitionTime":"2026-02-19T21:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.038181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.039167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.039338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.039499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.039642 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.142645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.142699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.142713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.142734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.142749 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.245605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.245672 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.245693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.245722 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.245740 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.348807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.348876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.348899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.348931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.348948 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.452673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.452752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.452789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.452820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.452843 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.555639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.555766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.555786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.555810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.555832 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.607112 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:46:52.537736116 +0000 UTC Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.659482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.659542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.659559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.659583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.659602 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.762082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.762158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.762213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.762249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.762315 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.866520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.866597 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.866622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.866653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.866677 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.970447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.970520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.970545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.970576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:09 crc kubenswrapper[4886]: I0219 21:01:09.970599 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:09Z","lastTransitionTime":"2026-02-19T21:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.073937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.074236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.074331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.074365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.074438 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.177695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.177754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.177771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.177794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.177811 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.280865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.281352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.281554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.281774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.281967 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.385078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.385125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.385143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.385165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.385183 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.488008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.488078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.488131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.488163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.488188 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.591810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.591898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.591921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.591952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.591980 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.600694 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:10 crc kubenswrapper[4886]: E0219 21:01:10.600853 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.600919 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.600916 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:10 crc kubenswrapper[4886]: E0219 21:01:10.601168 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:10 crc kubenswrapper[4886]: E0219 21:01:10.601297 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.601609 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:10 crc kubenswrapper[4886]: E0219 21:01:10.601811 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.607219 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:26:17.255112808 +0000 UTC Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.619066 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1876cc4e-c01b-4f98-9d4b-36c0ef5caf78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d23024e5a3956e078a2acd11d0c498d2eec026c002a01d0f24a8e08622b20e1e\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111bd67d90ee569c1b2e62757957dc4e871121561ff4eb507278cd8cfc637412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.639701 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d348c50-0d89-4a53-8364-14d6d129cd03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 20:59:52.093996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 20:59:52.094211 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 20:59:52.095369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3254082740/tls.crt::/tmp/serving-cert-3254082740/tls.key\\\\\\\"\\\\nI0219 20:59:52.596548 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 20:59:52.647586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 20:59:52.647613 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 20:59:52.647631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 20:59:52.647636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 20:59:52.655577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 20:59:52.655627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655639 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 20:59:52.655653 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0219 20:59:52.655662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 20:59:52.655670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 20:59:52.655678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 20:59:52.656142 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 20:59:52.658157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.659135 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6da4511-1657-4210-b103-d51f8f760a25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8226f3f96c8e0f74dc7aee6f00c5d28adcf17f73b265d966080de2a377ccbc79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021eafecf9529f6b892fa33e5c77ed773a467641e0d4c396d3017d54c871b3ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96e96243d456bec97fa26f33690a91299d02be730585b08655d7b18853a12e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.678580 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9ff8d610766de0dae4247185a60b5c99ca0b8c614c069e818829eb8a4228bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd13eff536b645de90d4b7bf22bc14f58721e9263c5169eebc3dcc02dc48a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.695676 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.695731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.695750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.695774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.695793 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.701482 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vfjj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b250eb86-03a2-41d3-b71b-2264cc0b285b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8876375f61a50aecfd735fd73751d4319aeb635dfa0ffdcc5f3eebf898aafbcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23f5183c381c498261e74143b990d97fbc6c82cf3d69f706fea5b0a50d189c3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://314f811b5554ea91ed0f49a71f549b975f53e0925c035fc7f209467b674964d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9357e530b8d95d7634fae118f17c4ef7a90018449f9bcac716aaba1eebb8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117c3a5043d1e873c9f95b7314583039766a6af15a914c59b9ef49a5b16c6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6b5a2c733cf418ab5a973b5b750ba0f0a59efc53d77febc328df23018e3515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c41c8be4c8376c9f37521b8ce75ef0a30beaa6a8a206b3fc25e6a5f7f34dd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T21:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmxx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vfjj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.722867 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f72327b-1f93-42b9-be5e-0fe0aee6035f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d8290bf97adfd00491ce8c604b2f3d964f62905337468cbf1ecec35dd961b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8842619bd40d1bf4d78e8a32e327199ff5c0e
380c87417ee0d862bbdd459a683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqfnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r47dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.749244 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614a0a2e9b694195aca7d60fbb1ca369cc12cf80eb739dc12e6219b83541fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.767997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.787021 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.799519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.799574 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.799591 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.799613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.799629 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.804428 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e291765d6132881d2ebc6acc17e7b96b25d98a7f8b4397e2e04dca4fde918696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.821811 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rnffz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f8fca5-68c6-4300-b2d8-64a58bf92a64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:41Z\\\",\\\"message\\\":\\\"2026-02-19T20:59:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863\\\\n2026-02-19T20:59:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68a9f874-630d-4463-b8d9-9af41be5f863 to /host/opt/cni/bin/\\\\n2026-02-19T20:59:56Z [verbose] multus-daemon started\\\\n2026-02-19T20:59:56Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T21:00:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T21:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmk4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rnffz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.836473 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lngzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee2e118-e60c-497a-bebd-d10319626e73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98872fa650c27f0b6
dc3a9f707c1b92862315978a69e7f8b48e0c7850eb9b8b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lngzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.848855 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b096c32d-4192-4529-bc55-b05d09004007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439264773bba8a402a5a93a7e4cd4144bca343a272c791478470ddd37c3b86d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87
594d09f2f28fd1f21ba945af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fst9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6stm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.862367 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6hp27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-922ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T21:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6hp27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc 
kubenswrapper[4886]: I0219 21:01:10.875005 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a730021c-71f9-4d10-b727-2ca3996f7315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T21:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44c38ce1d45d528816fef389df305927a56ce194d7a4f9008c53ba1c8c3872d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30802e3825dde5e5a0fcaa97b68b101cbf379a89ae031f9e6825ab9d8300ace\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6374bb4d3110394978d5438ec9d728e7d5a0b926555d6aa0cff5582ff326ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://006c62d0e5822ff5be8928565b24d3928faad03a7eed4c1e00cb7f97170723a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.894859 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.905531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.905581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.905598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 
21:01:10.905618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.905633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:10Z","lastTransitionTime":"2026-02-19T21:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.907635 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w7m4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fbb997b-bcbe-47fc-99ff-eb1e6b405954\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cda66332700cebfed6faff7efc48dc1c55800bf4323c4a1a80664df6001070f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dm9cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w7m4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:10 crc kubenswrapper[4886]: I0219 21:01:10.938004 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T21:00:48Z\\\",\\\"message\\\":\\\"gistry/node-ca-lngzl in Admin Network Policy controller\\\\nF0219 21:00:48.501006 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed 
to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:00:48Z is after 2025-08-24T17:21:41Z]\\\\nI0219 21:00:48.501014 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-lngzl Admin Network Policy controller: took 6.111µs\\\\nI0219 21:00:48.501061 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-rnffz in Admin Network Policy controller\\\\nI0219 21:00:48.501067 6964 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-rnffz Admin Network Policy controller: took 6.68µs\\\\nI0219 21:00:48.501074 6964 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-di\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T21:00:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T20:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://637e6e9ebc327bc819
e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T20:59:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T20:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmjqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T20:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nclwh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T21:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.007883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.007923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.007934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.007950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.007961 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.112123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.112183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.112201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.112227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.112247 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.215660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.215732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.215744 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.215763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.215797 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.318449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.318525 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.318549 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.318577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.318598 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.421655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.421736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.421763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.421794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.421817 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.524063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.524141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.524163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.524191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.524212 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.607744 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:22:48.372844419 +0000 UTC Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.626988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.627250 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.627456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.627957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.628118 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.731026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.731085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.731113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.731145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.731167 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.833922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.833967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.833985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.834007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.834025 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.937099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.937159 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.937177 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.937202 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:11 crc kubenswrapper[4886]: I0219 21:01:11.937221 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:11Z","lastTransitionTime":"2026-02-19T21:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.040451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.040545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.040562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.040587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.040604 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.143367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.143425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.143445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.143469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.143486 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.246452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.246588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.246615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.246644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.246666 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.349575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.349635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.349684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.349717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.349739 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.452340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.452398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.452418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.452445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.452470 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.529373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:12 crc kubenswrapper[4886]: E0219 21:01:12.529677 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:01:12 crc kubenswrapper[4886]: E0219 21:01:12.530214 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs podName:1160fb8a-b59d-4b7b-8632-d2b2ead9bb36 nodeName:}" failed. No retries permitted until 2026-02-19 21:02:16.530174177 +0000 UTC m=+167.158017277 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs") pod "network-metrics-daemon-6hp27" (UID: "1160fb8a-b59d-4b7b-8632-d2b2ead9bb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.555440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.555502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.555520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.555544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.555561 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.600443 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.600479 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.600545 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.600646 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:12 crc kubenswrapper[4886]: E0219 21:01:12.600639 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:12 crc kubenswrapper[4886]: E0219 21:01:12.600813 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:12 crc kubenswrapper[4886]: E0219 21:01:12.601022 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:12 crc kubenswrapper[4886]: E0219 21:01:12.601114 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.608138 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:44:34.660186115 +0000 UTC Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.657990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.658198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.658224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.658247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.658285 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.761087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.761176 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.761193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.761216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.761234 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.864468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.864526 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.864542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.864567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.864586 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.967765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.967828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.967846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.967903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:12 crc kubenswrapper[4886]: I0219 21:01:12.967927 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:12Z","lastTransitionTime":"2026-02-19T21:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.070588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.070649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.070666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.070691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.070708 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.173315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.173365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.173384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.173409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.173428 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.275948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.276008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.276027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.276049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.276066 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.378847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.378904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.378920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.378947 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.378964 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.482043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.482096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.482119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.482149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.482172 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.585717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.585807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.585858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.585883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.585904 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.609141 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:30:35.453169749 +0000 UTC Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.688956 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.689022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.689045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.689077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.689097 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.792821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.792897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.792922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.792955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.792979 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.896076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.896137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.896155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.896180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.896196 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.998637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.998709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.998769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.998795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:13 crc kubenswrapper[4886]: I0219 21:01:13.998813 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:13Z","lastTransitionTime":"2026-02-19T21:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.102338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.102442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.102462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.102489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.102512 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.210775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.211426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.211510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.211538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.211558 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.315334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.315374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.315384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.315405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.315417 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.417970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.418087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.418116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.418150 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.418176 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.520710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.520774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.520792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.520819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.520839 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.601110 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.601225 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:14 crc kubenswrapper[4886]: E0219 21:01:14.601341 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.601376 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.601395 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:14 crc kubenswrapper[4886]: E0219 21:01:14.601520 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:14 crc kubenswrapper[4886]: E0219 21:01:14.601627 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:14 crc kubenswrapper[4886]: E0219 21:01:14.601768 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.603360 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:01:14 crc kubenswrapper[4886]: E0219 21:01:14.603983 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.609317 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:10:02.023845397 +0000 UTC Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.623774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.623878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.623900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.623926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.623945 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.727118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.727167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.727185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.727210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.727228 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.830360 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.830402 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.830413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.830428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.830438 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.933719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.933744 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.933752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.933764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:14 crc kubenswrapper[4886]: I0219 21:01:14.933773 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:14Z","lastTransitionTime":"2026-02-19T21:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.036625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.036662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.036672 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.036688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.036699 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.139496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.139567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.139590 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.139621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.139646 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.242854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.242908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.242940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.242964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.242979 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.346217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.346335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.346359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.346388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.346407 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.449317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.449382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.449399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.449421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.449443 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.552669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.552790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.552810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.552833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.552850 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.609734 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:38:27.733792938 +0000 UTC Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.655180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.655256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.655317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.655347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.655368 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.758291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.758351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.758369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.758393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.758411 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.861024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.861080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.861097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.861122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.861144 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.950997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.951085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.951109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.951143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.951166 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.983925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.983973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.983985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.984004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 21:01:15 crc kubenswrapper[4886]: I0219 21:01:15.984016 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T21:01:15Z","lastTransitionTime":"2026-02-19T21:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.014335 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq"] Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.014670 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.017662 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.017922 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.018095 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.018246 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.080483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bc129a1-11f7-4e76-8da7-e378c4316c28-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.080538 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc129a1-11f7-4e76-8da7-e378c4316c28-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.080567 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3bc129a1-11f7-4e76-8da7-e378c4316c28-service-ca\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.080621 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bc129a1-11f7-4e76-8da7-e378c4316c28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.080650 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bc129a1-11f7-4e76-8da7-e378c4316c28-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.109679 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w7m4j" podStartSLOduration=83.10965244 podStartE2EDuration="1m23.10965244s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.107791305 +0000 UTC m=+106.735634365" watchObservedRunningTime="2026-02-19 21:01:16.10965244 +0000 UTC m=+106.737495500" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.178156 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.178129759 podStartE2EDuration="50.178129759s" 
podCreationTimestamp="2026-02-19 21:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.177925634 +0000 UTC m=+106.805768744" watchObservedRunningTime="2026-02-19 21:01:16.178129759 +0000 UTC m=+106.805972849" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.181952 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bc129a1-11f7-4e76-8da7-e378c4316c28-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.182062 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bc129a1-11f7-4e76-8da7-e378c4316c28-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.182172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc129a1-11f7-4e76-8da7-e378c4316c28-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.182239 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc129a1-11f7-4e76-8da7-e378c4316c28-service-ca\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.182358 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bc129a1-11f7-4e76-8da7-e378c4316c28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.182510 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bc129a1-11f7-4e76-8da7-e378c4316c28-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.182544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bc129a1-11f7-4e76-8da7-e378c4316c28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.184309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc129a1-11f7-4e76-8da7-e378c4316c28-service-ca\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.191877 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3bc129a1-11f7-4e76-8da7-e378c4316c28-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.203126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bc129a1-11f7-4e76-8da7-e378c4316c28-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-49lxq\" (UID: \"3bc129a1-11f7-4e76-8da7-e378c4316c28\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.228922 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.228889032 podStartE2EDuration="1m24.228889032s" podCreationTimestamp="2026-02-19 20:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.22881018 +0000 UTC m=+106.856653250" watchObservedRunningTime="2026-02-19 21:01:16.228889032 +0000 UTC m=+106.856732122" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.229566 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=84.229547908 podStartE2EDuration="1m24.229547908s" podCreationTimestamp="2026-02-19 20:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.206673077 +0000 UTC m=+106.834516227" watchObservedRunningTime="2026-02-19 21:01:16.229547908 +0000 UTC m=+106.857391038" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.270185 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-vfjj2" podStartSLOduration=83.270165336 podStartE2EDuration="1m23.270165336s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.268318172 +0000 UTC m=+106.896161252" watchObservedRunningTime="2026-02-19 21:01:16.270165336 +0000 UTC m=+106.898008386" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.282101 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r47dg" podStartSLOduration=82.282084773 podStartE2EDuration="1m22.282084773s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.281723434 +0000 UTC m=+106.909566524" watchObservedRunningTime="2026-02-19 21:01:16.282084773 +0000 UTC m=+106.909927823" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.318009 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.317980118 podStartE2EDuration="39.317980118s" podCreationTimestamp="2026-02-19 21:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.296037709 +0000 UTC m=+106.923880759" watchObservedRunningTime="2026-02-19 21:01:16.317980118 +0000 UTC m=+106.945823198" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.368810 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rnffz" podStartSLOduration=83.368780451 podStartE2EDuration="1m23.368780451s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.368027933 +0000 UTC m=+106.995871003" watchObservedRunningTime="2026-02-19 21:01:16.368780451 +0000 UTC m=+106.996623541" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.382954 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.403552 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lngzl" podStartSLOduration=83.403532848 podStartE2EDuration="1m23.403532848s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.402423672 +0000 UTC m=+107.030266722" watchObservedRunningTime="2026-02-19 21:01:16.403532848 +0000 UTC m=+107.031375898" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.420061 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podStartSLOduration=83.420043836 podStartE2EDuration="1m23.420043836s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:16.41981019 +0000 UTC m=+107.047653240" watchObservedRunningTime="2026-02-19 21:01:16.420043836 +0000 UTC m=+107.047886886" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.600880 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.600970 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.600919 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:16 crc kubenswrapper[4886]: E0219 21:01:16.601088 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.601178 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:16 crc kubenswrapper[4886]: E0219 21:01:16.601429 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:16 crc kubenswrapper[4886]: E0219 21:01:16.601565 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:16 crc kubenswrapper[4886]: E0219 21:01:16.602134 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.610426 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:56:44.264445881 +0000 UTC Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.610523 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 21:01:16 crc kubenswrapper[4886]: I0219 21:01:16.625236 4886 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 21:01:17 crc kubenswrapper[4886]: I0219 21:01:17.303016 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" event={"ID":"3bc129a1-11f7-4e76-8da7-e378c4316c28","Type":"ContainerStarted","Data":"cace19654d0aca52a1f75874ba4b575e74ea40b60c7037b71d03c9a692ad21d9"} Feb 19 21:01:17 crc kubenswrapper[4886]: I0219 21:01:17.303083 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" event={"ID":"3bc129a1-11f7-4e76-8da7-e378c4316c28","Type":"ContainerStarted","Data":"fe56dbff501d72c44e0c5da65828999f67972ae81f21f2bfff96e57798342538"} Feb 19 21:01:18 crc kubenswrapper[4886]: I0219 21:01:18.600352 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:18 crc kubenswrapper[4886]: I0219 21:01:18.600393 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:18 crc kubenswrapper[4886]: E0219 21:01:18.600514 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:18 crc kubenswrapper[4886]: I0219 21:01:18.600597 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:18 crc kubenswrapper[4886]: I0219 21:01:18.600593 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:18 crc kubenswrapper[4886]: E0219 21:01:18.600761 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:18 crc kubenswrapper[4886]: E0219 21:01:18.600926 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:18 crc kubenswrapper[4886]: E0219 21:01:18.601104 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:19 crc kubenswrapper[4886]: I0219 21:01:19.621996 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-49lxq" podStartSLOduration=86.621968755 podStartE2EDuration="1m26.621968755s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:17.320370741 +0000 UTC m=+107.948213821" watchObservedRunningTime="2026-02-19 21:01:19.621968755 +0000 UTC m=+110.249811835" Feb 19 21:01:19 crc kubenswrapper[4886]: I0219 21:01:19.623686 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 21:01:20 crc kubenswrapper[4886]: I0219 21:01:20.600757 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:20 crc kubenswrapper[4886]: I0219 21:01:20.600949 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:20 crc kubenswrapper[4886]: E0219 21:01:20.602611 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:20 crc kubenswrapper[4886]: I0219 21:01:20.602683 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:20 crc kubenswrapper[4886]: I0219 21:01:20.602716 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:20 crc kubenswrapper[4886]: E0219 21:01:20.602967 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:20 crc kubenswrapper[4886]: E0219 21:01:20.603072 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:20 crc kubenswrapper[4886]: E0219 21:01:20.603191 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:20 crc kubenswrapper[4886]: I0219 21:01:20.652345 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.652326322 podStartE2EDuration="1.652326322s" podCreationTimestamp="2026-02-19 21:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:20.648655724 +0000 UTC m=+111.276498864" watchObservedRunningTime="2026-02-19 21:01:20.652326322 +0000 UTC m=+111.280169402" Feb 19 21:01:22 crc kubenswrapper[4886]: I0219 21:01:22.601139 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:22 crc kubenswrapper[4886]: I0219 21:01:22.601202 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:22 crc kubenswrapper[4886]: I0219 21:01:22.601327 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:22 crc kubenswrapper[4886]: E0219 21:01:22.601395 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:22 crc kubenswrapper[4886]: I0219 21:01:22.601446 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:22 crc kubenswrapper[4886]: E0219 21:01:22.601579 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:22 crc kubenswrapper[4886]: E0219 21:01:22.601731 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:22 crc kubenswrapper[4886]: E0219 21:01:22.601798 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:24 crc kubenswrapper[4886]: I0219 21:01:24.600657 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:24 crc kubenswrapper[4886]: E0219 21:01:24.601129 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:24 crc kubenswrapper[4886]: I0219 21:01:24.600808 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:24 crc kubenswrapper[4886]: E0219 21:01:24.601232 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:24 crc kubenswrapper[4886]: I0219 21:01:24.600820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:24 crc kubenswrapper[4886]: E0219 21:01:24.601388 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:24 crc kubenswrapper[4886]: I0219 21:01:24.600705 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:24 crc kubenswrapper[4886]: E0219 21:01:24.601494 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:26 crc kubenswrapper[4886]: I0219 21:01:26.601152 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:26 crc kubenswrapper[4886]: I0219 21:01:26.601330 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:26 crc kubenswrapper[4886]: E0219 21:01:26.601359 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:26 crc kubenswrapper[4886]: I0219 21:01:26.601517 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:26 crc kubenswrapper[4886]: E0219 21:01:26.601716 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:26 crc kubenswrapper[4886]: E0219 21:01:26.602007 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:26 crc kubenswrapper[4886]: I0219 21:01:26.602126 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:26 crc kubenswrapper[4886]: E0219 21:01:26.602878 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:26 crc kubenswrapper[4886]: I0219 21:01:26.603491 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:01:26 crc kubenswrapper[4886]: E0219 21:01:26.603807 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nclwh_openshift-ovn-kubernetes(87d8f125-379b-4e5a-bedc-b55cf9edb00a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.341981 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/1.log" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.342699 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/0.log" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.342745 4886 generic.go:334] "Generic (PLEG): container finished" podID="83f8fca5-68c6-4300-b2d8-64a58bf92a64" containerID="0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad" exitCode=1 Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.342775 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerDied","Data":"0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad"} Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.342808 4886 scope.go:117] "RemoveContainer" containerID="3a055b309d3d47002e1e2bf9af96c75c472e0a0dd987265f3d4fbc5211ed4b10" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.343470 4886 scope.go:117] "RemoveContainer" containerID="0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad" Feb 19 21:01:28 crc kubenswrapper[4886]: E0219 21:01:28.343673 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rnffz_openshift-multus(83f8fca5-68c6-4300-b2d8-64a58bf92a64)\"" pod="openshift-multus/multus-rnffz" podUID="83f8fca5-68c6-4300-b2d8-64a58bf92a64" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.601242 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.601395 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.601310 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:28 crc kubenswrapper[4886]: E0219 21:01:28.601486 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:28 crc kubenswrapper[4886]: I0219 21:01:28.601250 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:28 crc kubenswrapper[4886]: E0219 21:01:28.601609 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:28 crc kubenswrapper[4886]: E0219 21:01:28.601780 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:28 crc kubenswrapper[4886]: E0219 21:01:28.601932 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:29 crc kubenswrapper[4886]: I0219 21:01:29.348975 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/1.log" Feb 19 21:01:30 crc kubenswrapper[4886]: E0219 21:01:30.539600 4886 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 21:01:30 crc kubenswrapper[4886]: I0219 21:01:30.601584 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:30 crc kubenswrapper[4886]: I0219 21:01:30.601684 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:30 crc kubenswrapper[4886]: I0219 21:01:30.601684 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:30 crc kubenswrapper[4886]: E0219 21:01:30.602734 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:30 crc kubenswrapper[4886]: I0219 21:01:30.602789 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:30 crc kubenswrapper[4886]: E0219 21:01:30.602922 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:30 crc kubenswrapper[4886]: E0219 21:01:30.602999 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:30 crc kubenswrapper[4886]: E0219 21:01:30.603126 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:30 crc kubenswrapper[4886]: E0219 21:01:30.716824 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:01:32 crc kubenswrapper[4886]: I0219 21:01:32.601091 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:32 crc kubenswrapper[4886]: E0219 21:01:32.601313 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:32 crc kubenswrapper[4886]: I0219 21:01:32.601372 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:32 crc kubenswrapper[4886]: E0219 21:01:32.601579 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:32 crc kubenswrapper[4886]: I0219 21:01:32.601593 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:32 crc kubenswrapper[4886]: E0219 21:01:32.601773 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:32 crc kubenswrapper[4886]: I0219 21:01:32.602522 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:32 crc kubenswrapper[4886]: E0219 21:01:32.602663 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:34 crc kubenswrapper[4886]: I0219 21:01:34.600659 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:34 crc kubenswrapper[4886]: E0219 21:01:34.600876 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:34 crc kubenswrapper[4886]: I0219 21:01:34.601491 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:34 crc kubenswrapper[4886]: I0219 21:01:34.601627 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:34 crc kubenswrapper[4886]: E0219 21:01:34.601674 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:34 crc kubenswrapper[4886]: E0219 21:01:34.601735 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:34 crc kubenswrapper[4886]: I0219 21:01:34.601799 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:34 crc kubenswrapper[4886]: E0219 21:01:34.601992 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:35 crc kubenswrapper[4886]: E0219 21:01:35.718428 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:01:36 crc kubenswrapper[4886]: I0219 21:01:36.601223 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:36 crc kubenswrapper[4886]: I0219 21:01:36.601365 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:36 crc kubenswrapper[4886]: E0219 21:01:36.601558 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:36 crc kubenswrapper[4886]: I0219 21:01:36.601611 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:36 crc kubenswrapper[4886]: I0219 21:01:36.601643 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:36 crc kubenswrapper[4886]: E0219 21:01:36.601749 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:36 crc kubenswrapper[4886]: E0219 21:01:36.601954 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:36 crc kubenswrapper[4886]: E0219 21:01:36.602125 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:38 crc kubenswrapper[4886]: I0219 21:01:38.600244 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:38 crc kubenswrapper[4886]: I0219 21:01:38.600336 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:38 crc kubenswrapper[4886]: I0219 21:01:38.600444 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:38 crc kubenswrapper[4886]: E0219 21:01:38.600452 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:38 crc kubenswrapper[4886]: I0219 21:01:38.600481 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:38 crc kubenswrapper[4886]: E0219 21:01:38.600879 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:38 crc kubenswrapper[4886]: I0219 21:01:38.600971 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:01:38 crc kubenswrapper[4886]: E0219 21:01:38.601041 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:38 crc kubenswrapper[4886]: E0219 21:01:38.601177 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:39 crc kubenswrapper[4886]: I0219 21:01:39.391709 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/3.log" Feb 19 21:01:39 crc kubenswrapper[4886]: I0219 21:01:39.397501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerStarted","Data":"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5"} Feb 19 21:01:39 crc kubenswrapper[4886]: I0219 21:01:39.398020 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:01:39 crc kubenswrapper[4886]: I0219 21:01:39.450918 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podStartSLOduration=106.450899334 podStartE2EDuration="1m46.450899334s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:39.44241028 +0000 UTC m=+130.070253360" watchObservedRunningTime="2026-02-19 21:01:39.450899334 +0000 UTC m=+130.078742384" Feb 19 21:01:39 crc kubenswrapper[4886]: I0219 21:01:39.744598 4886 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6hp27"] Feb 19 21:01:39 crc kubenswrapper[4886]: I0219 21:01:39.744707 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:39 crc kubenswrapper[4886]: E0219 21:01:39.744784 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:40 crc kubenswrapper[4886]: I0219 21:01:40.600545 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:40 crc kubenswrapper[4886]: E0219 21:01:40.600732 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:40 crc kubenswrapper[4886]: I0219 21:01:40.601524 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:40 crc kubenswrapper[4886]: I0219 21:01:40.601647 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:40 crc kubenswrapper[4886]: E0219 21:01:40.603101 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:40 crc kubenswrapper[4886]: E0219 21:01:40.603319 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:40 crc kubenswrapper[4886]: E0219 21:01:40.720214 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:01:41 crc kubenswrapper[4886]: I0219 21:01:41.601093 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:41 crc kubenswrapper[4886]: E0219 21:01:41.601317 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:42 crc kubenswrapper[4886]: I0219 21:01:42.600896 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:42 crc kubenswrapper[4886]: I0219 21:01:42.600946 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:42 crc kubenswrapper[4886]: I0219 21:01:42.601024 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:42 crc kubenswrapper[4886]: E0219 21:01:42.601632 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:42 crc kubenswrapper[4886]: E0219 21:01:42.601751 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:42 crc kubenswrapper[4886]: E0219 21:01:42.602324 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:43 crc kubenswrapper[4886]: I0219 21:01:43.600995 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:43 crc kubenswrapper[4886]: I0219 21:01:43.601475 4886 scope.go:117] "RemoveContainer" containerID="0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad" Feb 19 21:01:43 crc kubenswrapper[4886]: E0219 21:01:43.602148 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:44 crc kubenswrapper[4886]: I0219 21:01:44.418472 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/1.log" Feb 19 21:01:44 crc kubenswrapper[4886]: I0219 21:01:44.418564 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerStarted","Data":"b1d24b72a538ea2a24b25c0abe1e665851c85706dcd18e501c91a69c90bfa883"} Feb 19 21:01:44 crc kubenswrapper[4886]: I0219 21:01:44.600358 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:44 crc kubenswrapper[4886]: I0219 21:01:44.600430 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:44 crc kubenswrapper[4886]: E0219 21:01:44.600544 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 21:01:44 crc kubenswrapper[4886]: E0219 21:01:44.600616 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 21:01:44 crc kubenswrapper[4886]: I0219 21:01:44.600721 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:44 crc kubenswrapper[4886]: E0219 21:01:44.600847 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 21:01:45 crc kubenswrapper[4886]: I0219 21:01:45.600777 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:01:45 crc kubenswrapper[4886]: E0219 21:01:45.600998 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6hp27" podUID="1160fb8a-b59d-4b7b-8632-d2b2ead9bb36" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.556431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.600950 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.601496 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.602048 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.605453 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.607517 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.607783 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.610526 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.611179 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.611225 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.612330 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.613938 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.614671 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.614913 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.615240 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.615485 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.615734 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.616536 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.617077 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.617131 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.622173 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.623662 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.625367 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.626213 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.626326 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.626651 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.629945 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.630504 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwbrv"] Feb 19 21:01:46 
crc kubenswrapper[4886]: I0219 21:01:46.641100 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.641180 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.641414 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.641107 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.641542 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.655124 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.655457 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.655621 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.655795 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.658390 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 
21:01:46.658952 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x82g8"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659069 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659314 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659331 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5f4xd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.662241 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.660967 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-client-ca\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.663578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.665059 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4246fabd-d555-42f3-a01b-fbd9c54c5af1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.669061 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a66fafd0-fa91-4368-8262-88fc7ef86dfa-audit-dir\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.670165 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-serving-cert\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.670677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74f8h\" (UniqueName: \"kubernetes.io/projected/76ea6656-5d7e-406b-b36e-3d72b9b0a847-kube-api-access-74f8h\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.670898 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76ea6656-5d7e-406b-b36e-3d72b9b0a847-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 
21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.671344 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-encryption-config\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.671571 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-serving-cert\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.671750 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.664578 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sk46q"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659421 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659643 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676649 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4246fabd-d555-42f3-a01b-fbd9c54c5af1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677078 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kcx8\" (UniqueName: \"kubernetes.io/projected/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-kube-api-access-6kcx8\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677115 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-serving-cert\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677172 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-config\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: 
\"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677235 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-etcd-client\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677298 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677320 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msj55\" (UniqueName: \"kubernetes.io/projected/a66fafd0-fa91-4368-8262-88fc7ef86dfa-kube-api-access-msj55\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677347 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-auth-proxy-config\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-tx74p\" (UniqueName: \"kubernetes.io/projected/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-kube-api-access-tx74p\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677399 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/4246fabd-d555-42f3-a01b-fbd9c54c5af1-kube-api-access-r5lfk\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-machine-approver-tls\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677445 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-audit-policies\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677468 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ea6656-5d7e-406b-b36e-3d72b9b0a847-config\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: 
\"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5dhf\" (UniqueName: \"kubernetes.io/projected/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-kube-api-access-n5dhf\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677520 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-config\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677539 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76ea6656-5d7e-406b-b36e-3d72b9b0a847-images\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659829 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659918 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659942 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 
21:01:46.659976 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.677954 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-q85bd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.659994 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.660159 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.660215 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.660301 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.678311 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.662717 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.678456 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.678482 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.669480 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.669721 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.669789 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.669903 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.670078 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.675802 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.675840 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.675906 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.675927 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 21:01:46 crc 
kubenswrapper[4886]: I0219 21:01:46.675946 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676008 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676514 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676566 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676675 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676752 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.679821 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.676792 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.681494 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.681971 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-77m65"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.682255 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.682480 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.690755 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rn5ms"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.691671 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.691685 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.695302 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6lvw2"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.695781 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.697101 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.697906 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.698094 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.699819 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.700094 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.700445 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.700763 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.700922 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.701040 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.701379 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.701501 4886 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.701611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.701713 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.702037 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.702294 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.702407 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.702695 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.702996 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.703022 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.703222 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.703456 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 
21:01:46.703610 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.704115 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.704247 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.704479 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.704750 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.705051 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.715087 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.723287 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.724006 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.724714 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.725132 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.738574 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.745017 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.746153 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.746340 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sp9km"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.746385 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.746494 4886 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.746609 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.746769 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.747546 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.750237 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.750497 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.750550 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.750585 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.751345 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.751494 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.754369 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.754660 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.754749 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.755782 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tp6s4"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.756287 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.760055 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.764375 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kjzmh"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.764483 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.764945 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.766223 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.771320 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.772223 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.773385 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.774090 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.774377 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.774497 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2wz7p"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.775002 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.776052 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.776501 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778024 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx74p\" (UniqueName: \"kubernetes.io/projected/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-kube-api-access-tx74p\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778054 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/4246fabd-d555-42f3-a01b-fbd9c54c5af1-kube-api-access-r5lfk\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778075 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-machine-approver-tls\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-audit-policies\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778111 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ea6656-5d7e-406b-b36e-3d72b9b0a847-config\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778128 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5dhf\" (UniqueName: \"kubernetes.io/projected/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-kube-api-access-n5dhf\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778152 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-config\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778166 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76ea6656-5d7e-406b-b36e-3d72b9b0a847-images\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-client-ca\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: 
I0219 21:01:46.778203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4246fabd-d555-42f3-a01b-fbd9c54c5af1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a66fafd0-fa91-4368-8262-88fc7ef86dfa-audit-dir\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778440 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-serving-cert\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74f8h\" (UniqueName: \"kubernetes.io/projected/76ea6656-5d7e-406b-b36e-3d72b9b0a847-kube-api-access-74f8h\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76ea6656-5d7e-406b-b36e-3d72b9b0a847-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778500 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-encryption-config\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-serving-cert\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778542 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778559 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4246fabd-d555-42f3-a01b-fbd9c54c5af1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778583 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kcx8\" (UniqueName: \"kubernetes.io/projected/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-kube-api-access-6kcx8\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778599 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-serving-cert\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778604 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778616 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-config\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778646 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-etcd-client\") pod 
\"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778663 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a66fafd0-fa91-4368-8262-88fc7ef86dfa-audit-dir\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.778937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msj55\" (UniqueName: \"kubernetes.io/projected/a66fafd0-fa91-4368-8262-88fc7ef86dfa-kube-api-access-msj55\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.779594 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-auth-proxy-config\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.780124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/76ea6656-5d7e-406b-b36e-3d72b9b0a847-images\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.780309 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.780644 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.780812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ea6656-5d7e-406b-b36e-3d72b9b0a847-config\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.780856 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4246fabd-d555-42f3-a01b-fbd9c54c5af1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.781182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc 
kubenswrapper[4886]: I0219 21:01:46.781247 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.781345 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-audit-policies\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.781743 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a66fafd0-fa91-4368-8262-88fc7ef86dfa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.781850 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.782126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-config\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.782427 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-config\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.782636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-auth-proxy-config\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.782643 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.782792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-client-ca\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.783683 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.784153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-serving-cert\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.784840 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.795518 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.796241 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-etcd-client\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.799640 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-serving-cert\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.800083 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76ea6656-5d7e-406b-b36e-3d72b9b0a847-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.801224 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-machine-approver-tls\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" Feb 19 21:01:46 
crc kubenswrapper[4886]: I0219 21:01:46.801634 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.801738 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a66fafd0-fa91-4368-8262-88fc7ef86dfa-encryption-config\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.801936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4246fabd-d555-42f3-a01b-fbd9c54c5af1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.802359 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.802650 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.806742 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n8kml"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.806935 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.807488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-serving-cert\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.809145 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.809313 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.810241 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqhlp"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.810448 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.811480 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.811645 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.812568 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.813877 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.814304 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.815387 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-w225p"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.816553 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-w225p" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.816935 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.817445 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.818045 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.819008 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.819632 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.820081 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.820991 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.821248 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.822651 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.822889 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.823216 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.823383 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.824689 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-q85bd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.825642 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.826532 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.826904 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.827325 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.827849 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wmdwc"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.828368 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.828864 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sk46q"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.829798 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.831209 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwbrv"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.837919 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-77m65"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.839106 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kjzmh"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.840655 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.841112 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.843215 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6lvw2"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.843480 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sp9km"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.845677 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqhlp"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 
21:01:46.848974 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.851218 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tp6s4"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.851731 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.852613 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.853621 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-w225p"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.856839 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.857881 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n8kml"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.859189 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qjcds"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.860530 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bgmxp"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.860732 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.860939 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.861513 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.862536 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x82g8"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.862791 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.864245 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.865282 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.866250 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rn5ms"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.867216 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5f4xd"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.868165 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.869112 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.870276 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.871273 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.872196 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qjcds"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.873092 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.874081 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.874930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.877941 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.882547 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wmdwc"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.883134 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.885234 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.886683 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qvvdv"] Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.887398 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qvvdv"
Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.887682 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qvvdv"]
Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.903400 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.924196 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.942584 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.963581 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 19 21:01:46 crc kubenswrapper[4886]: I0219 21:01:46.982791 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.003821 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.023579 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.043455 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.062628 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.103043 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.123542 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.142720 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.163469 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.184138 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.204500 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.223602 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.252169 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.263935 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.283876 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.304428 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.324668 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.343314 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.364170 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.384190 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.404601 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.441579 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.443727 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.464005 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.484402 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.533512 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx74p\" (UniqueName: \"kubernetes.io/projected/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-kube-api-access-tx74p\") pod \"route-controller-manager-6576b87f9c-lk9fd\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.554103 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/4246fabd-d555-42f3-a01b-fbd9c54c5af1-kube-api-access-r5lfk\") pod \"openshift-apiserver-operator-796bbdcf4f-jg8st\" (UID: \"4246fabd-d555-42f3-a01b-fbd9c54c5af1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.570252 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msj55\" (UniqueName: \"kubernetes.io/projected/a66fafd0-fa91-4368-8262-88fc7ef86dfa-kube-api-access-msj55\") pod \"apiserver-7bbb656c7d-kg2xv\" (UID: \"a66fafd0-fa91-4368-8262-88fc7ef86dfa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.591138 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74f8h\" (UniqueName: \"kubernetes.io/projected/76ea6656-5d7e-406b-b36e-3d72b9b0a847-kube-api-access-74f8h\") pod \"machine-api-operator-5694c8668f-bwbrv\" (UID: \"76ea6656-5d7e-406b-b36e-3d72b9b0a847\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.600783 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.603714 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.612584 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kcx8\" (UniqueName: \"kubernetes.io/projected/c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec-kube-api-access-6kcx8\") pod \"machine-approver-56656f9798-pt2hw\" (UID: \"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.624602 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.643517 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.651820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.662112 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.664569 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.680001 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.684940 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.706862 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.721752 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.723586 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.733292 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5dhf\" (UniqueName: \"kubernetes.io/projected/388b84d7-1fa4-4c41-88f3-85a8b836c6a0-kube-api-access-n5dhf\") pod \"openshift-config-operator-7777fb866f-r6ntg\" (UID: \"388b84d7-1fa4-4c41-88f3-85a8b836c6a0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.745005 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.765175 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.783876 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.804052 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.822370 4886 request.go:700] Waited for 1.01170557s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.824633 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.843813 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.864483 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.887623 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.904010 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.910068 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st"]
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.924040 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: W0219 21:01:47.928120 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4246fabd_d555_42f3_a01b_fbd9c54c5af1.slice/crio-cfbf0fbfe19a016fedb406d528beb6bf4c478ee51849e470ac82014938c9dfa0 WatchSource:0}: Error finding container cfbf0fbfe19a016fedb406d528beb6bf4c478ee51849e470ac82014938c9dfa0: Status 404 returned error can't find the container with id cfbf0fbfe19a016fedb406d528beb6bf4c478ee51849e470ac82014938c9dfa0
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.943387 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.963495 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.969829 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.983313 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 21:01:47 crc kubenswrapper[4886]: I0219 21:01:47.985511 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwbrv"]
Feb 19 21:01:47 crc kubenswrapper[4886]: W0219 21:01:47.992270 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76ea6656_5d7e_406b_b36e_3d72b9b0a847.slice/crio-a8028e61049625eb280afccadd12e374f6ed875b18cb13a0672fe2673ac2c267 WatchSource:0}: Error finding container a8028e61049625eb280afccadd12e374f6ed875b18cb13a0672fe2673ac2c267: Status 404 returned error can't find the container with id a8028e61049625eb280afccadd12e374f6ed875b18cb13a0672fe2673ac2c267
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.009477 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.022905 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.043639 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.063759 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.083242 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.103066 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.122232 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg"]
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.124511 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 21:01:48 crc kubenswrapper[4886]: W0219 21:01:48.138892 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod388b84d7_1fa4_4c41_88f3_85a8b836c6a0.slice/crio-111524b057f3f5c9e4018c3496e046ebc927ddbefa032bf59c66e9b3bbc17ae6 WatchSource:0}: Error finding container 111524b057f3f5c9e4018c3496e046ebc927ddbefa032bf59c66e9b3bbc17ae6: Status 404 returned error can't find the container with id 111524b057f3f5c9e4018c3496e046ebc927ddbefa032bf59c66e9b3bbc17ae6
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.143273 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.154883 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv"]
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.159048 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"]
Feb 19 21:01:48 crc kubenswrapper[4886]: W0219 21:01:48.161528 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda66fafd0_fa91_4368_8262_88fc7ef86dfa.slice/crio-9e0373f9c80553a2ceb1111ec3f2887b9c5b7b31ec218e80d96829bc5f85b33a WatchSource:0}: Error finding container 9e0373f9c80553a2ceb1111ec3f2887b9c5b7b31ec218e80d96829bc5f85b33a: Status 404 returned error can't find the container with id 9e0373f9c80553a2ceb1111ec3f2887b9c5b7b31ec218e80d96829bc5f85b33a
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.162810 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 21:01:48 crc kubenswrapper[4886]: W0219 21:01:48.166761 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2075fe_7545_4e4d_bdbf_b9d1fd1d0723.slice/crio-5df2591e59bb5c0e66a6e1b8970f9655cb31518d9e16fa47e7749f7b47263f5b WatchSource:0}: Error finding container 5df2591e59bb5c0e66a6e1b8970f9655cb31518d9e16fa47e7749f7b47263f5b: Status 404 returned error can't find the container with id 5df2591e59bb5c0e66a6e1b8970f9655cb31518d9e16fa47e7749f7b47263f5b
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.183663 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.202523 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.225443 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.244181 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.263519 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.283161 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.303005 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.323962 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.344035 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.364426 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.384723 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.403352 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.424500 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.444438 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.449061 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" event={"ID":"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec","Type":"ContainerStarted","Data":"00a8c07afb5884b9a92e58b067b1a4f6e443d4de8c230cb4f55d5453e4208bc1"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.449119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" event={"ID":"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec","Type":"ContainerStarted","Data":"9e866df668822cd360d3c3ccf01f544b2a4ec2718109c89b329f1af43d6ee1e0"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.449136 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" event={"ID":"c376fe0e-6e7a-40e9-a8c8-764c2d7dbaec","Type":"ContainerStarted","Data":"912ef44d30b8d550b336ab795e65ea9f21476afd646d5f7cb93d8bdbc673d1cf"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.451362 4886 generic.go:334] "Generic (PLEG): container finished" podID="388b84d7-1fa4-4c41-88f3-85a8b836c6a0" containerID="7fa192ecd444a30967c7437b85d3ec65cac3f2a170179ca2e4cd45a512beb973" exitCode=0
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.451411 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" event={"ID":"388b84d7-1fa4-4c41-88f3-85a8b836c6a0","Type":"ContainerDied","Data":"7fa192ecd444a30967c7437b85d3ec65cac3f2a170179ca2e4cd45a512beb973"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.451428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" event={"ID":"388b84d7-1fa4-4c41-88f3-85a8b836c6a0","Type":"ContainerStarted","Data":"111524b057f3f5c9e4018c3496e046ebc927ddbefa032bf59c66e9b3bbc17ae6"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.454254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" event={"ID":"76ea6656-5d7e-406b-b36e-3d72b9b0a847","Type":"ContainerStarted","Data":"8535d9938c75d5b77e401d73efb606fef452d974f2bdfc2ce08c0c1003daba69"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.454347 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" event={"ID":"76ea6656-5d7e-406b-b36e-3d72b9b0a847","Type":"ContainerStarted","Data":"e6e2972089febdd028ec74a383028a38b5b5568360b054029103789d649b2e8a"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.454371 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" event={"ID":"76ea6656-5d7e-406b-b36e-3d72b9b0a847","Type":"ContainerStarted","Data":"a8028e61049625eb280afccadd12e374f6ed875b18cb13a0672fe2673ac2c267"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.457366 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" event={"ID":"a66fafd0-fa91-4368-8262-88fc7ef86dfa","Type":"ContainerStarted","Data":"9e0373f9c80553a2ceb1111ec3f2887b9c5b7b31ec218e80d96829bc5f85b33a"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.459477 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" event={"ID":"4246fabd-d555-42f3-a01b-fbd9c54c5af1","Type":"ContainerStarted","Data":"5bf7a550de5d4c73ae6b56f6deb5192ff9d09d988ac2fbd0e3b34a927af366b3"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.459531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" event={"ID":"4246fabd-d555-42f3-a01b-fbd9c54c5af1","Type":"ContainerStarted","Data":"cfbf0fbfe19a016fedb406d528beb6bf4c478ee51849e470ac82014938c9dfa0"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.462485 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" event={"ID":"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723","Type":"ContainerStarted","Data":"3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.462530 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" event={"ID":"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723","Type":"ContainerStarted","Data":"5df2591e59bb5c0e66a6e1b8970f9655cb31518d9e16fa47e7749f7b47263f5b"}
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.462817 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.463110 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.465336 4886 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lk9fd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.465394 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" podUID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.488303 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.503984 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.523510 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.543631 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.563110 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.583354 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.605596 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.623861 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.644037 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.664443 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.684879 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.723373 4886 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.743647 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.764441 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.783344 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.803455 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.823558 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.842576 4886 request.go:700] Waited for 1.955019461s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.844842 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.863188 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.883584 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.923145 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 19 21:01:48 crc kubenswrapper[4886]: I0219 21:01:48.943236 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000734 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-etcd-service-ca\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-serving-cert\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000798 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqp6\" (UniqueName: \"kubernetes.io/projected/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-kube-api-access-4hqp6\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000834 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/824c61b2-5c56-4bc5-8462-6582a2ac0465-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000865 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0774fdb-3589-46f5-b263-00664fd69836-etcd-client\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000904 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56b95\" (UniqueName: \"kubernetes.io/projected/07134087-479c-4666-a80c-7c50a2a5a005-kube-api-access-56b95\") pod \"dns-operator-744455d44c-kjzmh\" (UID: \"07134087-479c-4666-a80c-7c50a2a5a005\") " pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000919 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000947 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-config\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000964 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-serving-cert\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.000990 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mkq9\" (UniqueName: \"kubernetes.io/projected/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-kube-api-access-5mkq9\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001007 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb0c25ff-7e97-493b-8e57-e25312c1403b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001025 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0774fdb-3589-46f5-b263-00664fd69836-serving-cert\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001071 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/81e09dc2-360c-45f3-ae38-1d40f0345fcb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tdlnc\" (UID: \"81e09dc2-360c-45f3-ae38-1d40f0345fcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001086 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName:
\"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-bound-sa-token\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001101 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-service-ca-bundle\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001141 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-config\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-console-config\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001173 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-tls\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001205 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-trusted-ca-bundle\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001240 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7013b72c-2c60-4174-b7e9-a62de8263d50-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001256 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-config\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001305 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4q2\" (UniqueName: \"kubernetes.io/projected/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-kube-api-access-kk4q2\") pod 
\"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001322 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2cjx\" (UniqueName: \"kubernetes.io/projected/b77d3d20-a193-4bf3-a448-e48059491a85-kube-api-access-r2cjx\") pod \"downloads-7954f5f757-6lvw2\" (UID: \"b77d3d20-a193-4bf3-a448-e48059491a85\") " pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001338 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-audit\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001353 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001377 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-etcd-client\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001426 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-serving-cert\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001460 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-certificates\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7013b72c-2c60-4174-b7e9-a62de8263d50-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001509 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb0c25ff-7e97-493b-8e57-e25312c1403b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07134087-479c-4666-a80c-7c50a2a5a005-metrics-tls\") pod \"dns-operator-744455d44c-kjzmh\" (UID: \"07134087-479c-4666-a80c-7c50a2a5a005\") " pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001589 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-etcd-serving-ca\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-policies\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqsd\" (UniqueName: \"kubernetes.io/projected/e42dbc11-eae0-4ed9-a653-304ed853ada3-kube-api-access-xxqsd\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-trusted-ca\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001683 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " 
pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001711 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-oauth-config\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-dir\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001769 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c25ff-7e97-493b-8e57-e25312c1403b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-client-ca\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001806 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8fc9ae95-b72d-42a3-943d-30c652843b61-audit-dir\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001821 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001836 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001852 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2k9\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-kube-api-access-zb2k9\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001868 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-serving-cert\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001890 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-image-import-ca\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001903 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-serving-cert\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001918 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-config\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001932 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-oauth-serving-cert\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.001999 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htgjh\" (UniqueName: \"kubernetes.io/projected/fb0c25ff-7e97-493b-8e57-e25312c1403b-kube-api-access-htgjh\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002013 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-etcd-ca\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002037 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-trusted-ca\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002051 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fc9ae95-b72d-42a3-943d-30c652843b61-node-pullsecrets\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002066 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2gr\" (UniqueName: \"kubernetes.io/projected/8fc9ae95-b72d-42a3-943d-30c652843b61-kube-api-access-6s2gr\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002083 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-service-ca\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002100 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-config\") pod 
\"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002114 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/824c61b2-5c56-4bc5-8462-6582a2ac0465-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002131 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rlp\" (UniqueName: \"kubernetes.io/projected/824c61b2-5c56-4bc5-8462-6582a2ac0465-kube-api-access-k6rlp\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002146 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq8cl\" (UniqueName: \"kubernetes.io/projected/e0774fdb-3589-46f5-b263-00664fd69836-kube-api-access-tq8cl\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-encryption-config\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" 
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002186 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj95l\" (UniqueName: \"kubernetes.io/projected/d03f4c56-f429-4911-814d-02610d24f7ec-kube-api-access-gj95l\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.002213 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgx8\" (UniqueName: \"kubernetes.io/projected/81e09dc2-360c-45f3-ae38-1d40f0345fcb-kube-api-access-pqgx8\") pod \"cluster-samples-operator-665b6dd947-tdlnc\" (UID: \"81e09dc2-360c-45f3-ae38-1d40f0345fcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.007353 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:49.506286282 +0000 UTC m=+140.134129462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.103573 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.104091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-serving-cert\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.104318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c7870ee6-dc66-4746-9a0a-481377abc8a7-node-bootstrap-token\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.104524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-image-import-ca\") pod 
\"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.104651 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:49.60460107 +0000 UTC m=+140.232444120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.104848 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-serving-cert\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b171410-3c97-4589-b62d-2190a13cbb3e-service-ca-bundle\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105177 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-socket-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105358 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-oauth-serving-cert\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105511 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105847 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-trusted-ca\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.105998 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s2gr\" (UniqueName: \"kubernetes.io/projected/8fc9ae95-b72d-42a3-943d-30c652843b61-kube-api-access-6s2gr\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-image-import-ca\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106158 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-service-ca\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-mountpoint-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106345 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/824c61b2-5c56-4bc5-8462-6582a2ac0465-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:49 
crc kubenswrapper[4886]: I0219 21:01:49.106382 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9kr\" (UniqueName: \"kubernetes.io/projected/8fca7e4e-2791-4d94-b4c5-efc53888b49f-kube-api-access-gc9kr\") pod \"ingress-canary-w225p\" (UID: \"8fca7e4e-2791-4d94-b4c5-efc53888b49f\") " pod="openshift-ingress-canary/ingress-canary-w225p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6rlp\" (UniqueName: \"kubernetes.io/projected/824c61b2-5c56-4bc5-8462-6582a2ac0465-kube-api-access-k6rlp\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67f954f3-df5a-4b74-b54e-1b29fee2a572-images\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106523 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2425979c-a518-4bd1-9289-032b9f57e016-metrics-tls\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-encryption-config\") pod 
\"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106595 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj95l\" (UniqueName: \"kubernetes.io/projected/d03f4c56-f429-4911-814d-02610d24f7ec-kube-api-access-gj95l\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106631 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmmr\" (UniqueName: \"kubernetes.io/projected/67f954f3-df5a-4b74-b54e-1b29fee2a572-kube-api-access-nvmmr\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106666 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3260efb9-ca7f-4ad6-af84-4302737e8250-config\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106703 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hqp6\" (UniqueName: \"kubernetes.io/projected/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-kube-api-access-4hqp6\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106742 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106779 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsq2\" (UniqueName: \"kubernetes.io/projected/49e472dc-470f-43d1-8fb8-31e53a1a1c40-kube-api-access-8wsq2\") pod \"control-plane-machine-set-operator-78cbb6b69f-b9c4j\" (UID: \"49e472dc-470f-43d1-8fb8-31e53a1a1c40\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56b95\" (UniqueName: \"kubernetes.io/projected/07134087-479c-4666-a80c-7c50a2a5a005-kube-api-access-56b95\") pod \"dns-operator-744455d44c-kjzmh\" (UID: \"07134087-479c-4666-a80c-7c50a2a5a005\") " pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/824c61b2-5c56-4bc5-8462-6582a2ac0465-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106949 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0774fdb-3589-46f5-b263-00664fd69836-etcd-client\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.106984 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107017 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-tmpfs\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107055 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9vf\" (UniqueName: \"kubernetes.io/projected/8b5c741c-61db-4b35-997b-8edd406b5b01-kube-api-access-qn9vf\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107086 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0fd9b417-e431-4afc-b6a7-5c269fa04171-srv-cert\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb0c25ff-7e97-493b-8e57-e25312c1403b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107163 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0774fdb-3589-46f5-b263-00664fd69836-serving-cert\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107202 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/81e09dc2-360c-45f3-ae38-1d40f0345fcb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tdlnc\" (UID: \"81e09dc2-360c-45f3-ae38-1d40f0345fcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107246 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-csi-data-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " 
pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107307 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/762d12a4-6d88-4715-923e-916dfc4ecad3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l7kln\" (UID: \"762d12a4-6d88-4715-923e-916dfc4ecad3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107346 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202fcc8c-4e14-4336-aaef-22f33ff09ece-config-volume\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107377 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/646ec78e-4771-472a-8d9e-5356a4799f59-proxy-tls\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107409 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e66e32-ffac-421f-ab01-252cb8d4e589-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107447 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-bound-sa-token\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-service-ca-bundle\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-stats-auth\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107577 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8tx\" (UniqueName: \"kubernetes.io/projected/f21bc18b-845c-491a-8d27-4cdb035e26bc-kube-api-access-hn8tx\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107617 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-config\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc 
kubenswrapper[4886]: I0219 21:01:49.107655 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-tls\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107690 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hhtn\" (UniqueName: \"kubernetes.io/projected/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-kube-api-access-7hhtn\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbpb\" (UniqueName: \"kubernetes.io/projected/c7870ee6-dc66-4746-9a0a-481377abc8a7-kube-api-access-4fbpb\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107759 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7013b72c-2c60-4174-b7e9-a62de8263d50-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-trusted-ca-bundle\") pod \"console-f9d7485db-rn5ms\" (UID: 
\"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107829 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bppx\" (UniqueName: \"kubernetes.io/projected/19ce12ce-88ca-4e70-bce7-c87d5d064955-kube-api-access-8bppx\") pod \"multus-admission-controller-857f4d67dd-n8kml\" (UID: \"19ce12ce-88ca-4e70-bce7-c87d5d064955\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107875 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107913 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-etcd-client\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107947 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-apiservice-cert\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.107982 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49k4m\" (UniqueName: \"kubernetes.io/projected/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-kube-api-access-49k4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108033 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgz4k\" (UniqueName: \"kubernetes.io/projected/202fcc8c-4e14-4336-aaef-22f33ff09ece-kube-api-access-fgz4k\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108059 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-signing-cabundle\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108081 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67f954f3-df5a-4b74-b54e-1b29fee2a572-proxy-tls\") pod 
\"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108103 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-webhook-cert\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108127 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmml\" (UniqueName: \"kubernetes.io/projected/0fd9b417-e431-4afc-b6a7-5c269fa04171-kube-api-access-zmmml\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7013b72c-2c60-4174-b7e9-a62de8263d50-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108245 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0fd9b417-e431-4afc-b6a7-5c269fa04171-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108306 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8fc9ae95-b72d-42a3-943d-30c652843b61-audit-dir\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108335 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2k9\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-kube-api-access-zb2k9\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-metrics-certs\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " 
pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108426 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-config\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108451 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/22d58380-a045-498b-aa0e-d07a603210ff-srv-cert\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b445fc1-17c9-4bae-9641-6dc5388ddb28-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108509 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/646ec78e-4771-472a-8d9e-5356a4799f59-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108556 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c7870ee6-dc66-4746-9a0a-481377abc8a7-certs\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b445fc1-17c9-4bae-9641-6dc5388ddb28-config\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108602 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5l2\" (UniqueName: \"kubernetes.io/projected/3260efb9-ca7f-4ad6-af84-4302737e8250-kube-api-access-np5l2\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108633 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fc9ae95-b72d-42a3-943d-30c652843b61-node-pullsecrets\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc 
kubenswrapper[4886]: I0219 21:01:49.108657 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htgjh\" (UniqueName: \"kubernetes.io/projected/fb0c25ff-7e97-493b-8e57-e25312c1403b-kube-api-access-htgjh\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108680 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-etcd-ca\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108705 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv6r\" (UniqueName: \"kubernetes.io/projected/22d58380-a045-498b-aa0e-d07a603210ff-kube-api-access-qjv6r\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108728 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63178e9b-177f-4dd9-ac6c-e54c48264262-metrics-tls\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108757 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-config\") pod \"controller-manager-879f6c89f-q85bd\" 
(UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108779 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3260efb9-ca7f-4ad6-af84-4302737e8250-serving-cert\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108803 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq8cl\" (UniqueName: \"kubernetes.io/projected/e0774fdb-3589-46f5-b263-00664fd69836-kube-api-access-tq8cl\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108833 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqgx8\" (UniqueName: \"kubernetes.io/projected/81e09dc2-360c-45f3-ae38-1d40f0345fcb-kube-api-access-pqgx8\") pod \"cluster-samples-operator-665b6dd947-tdlnc\" (UID: \"81e09dc2-360c-45f3-ae38-1d40f0345fcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108858 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc2hx\" (UniqueName: \"kubernetes.io/projected/a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36-kube-api-access-dc2hx\") pod \"migrator-59844c95c7-5zkr9\" (UID: \"a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108884 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-serving-cert\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108911 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-etcd-service-ca\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108942 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63178e9b-177f-4dd9-ac6c-e54c48264262-trusted-ca\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108967 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2425979c-a518-4bd1-9289-032b9f57e016-config-volume\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.108993 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 
crc kubenswrapper[4886]: I0219 21:01:49.109016 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202fcc8c-4e14-4336-aaef-22f33ff09ece-secret-volume\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109042 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e66e32-ffac-421f-ab01-252cb8d4e589-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109072 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-config\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109096 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49e472dc-470f-43d1-8fb8-31e53a1a1c40-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b9c4j\" (UID: \"49e472dc-470f-43d1-8fb8-31e53a1a1c40\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-serving-cert\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109151 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mkq9\" (UniqueName: \"kubernetes.io/projected/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-kube-api-access-5mkq9\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109177 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-default-certificate\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2669k\" (UniqueName: \"kubernetes.io/projected/6b171410-3c97-4589-b62d-2190a13cbb3e-kube-api-access-2669k\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e66e32-ffac-421f-ab01-252cb8d4e589-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109246 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6k7d\" (UniqueName: \"kubernetes.io/projected/63178e9b-177f-4dd9-ac6c-e54c48264262-kube-api-access-w6k7d\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109298 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-console-config\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109324 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-registration-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109375 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-plugins-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109403 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-config\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109429 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a137242-a113-4f50-b83f-38cce1ae8ca2-config\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109456 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/22d58380-a045-498b-aa0e-d07a603210ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a137242-a113-4f50-b83f-38cce1ae8ca2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4q2\" (UniqueName: \"kubernetes.io/projected/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-kube-api-access-kk4q2\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109531 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2cjx\" (UniqueName: \"kubernetes.io/projected/b77d3d20-a193-4bf3-a448-e48059491a85-kube-api-access-r2cjx\") pod \"downloads-7954f5f757-6lvw2\" (UID: \"b77d3d20-a193-4bf3-a448-e48059491a85\") " pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109553 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-audit\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109580 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109605 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-serving-cert\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-signing-key\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109679 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wmt\" (UniqueName: \"kubernetes.io/projected/646ec78e-4771-472a-8d9e-5356a4799f59-kube-api-access-46wmt\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109702 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw765\" (UniqueName: \"kubernetes.io/projected/762d12a4-6d88-4715-923e-916dfc4ecad3-kube-api-access-fw765\") pod \"package-server-manager-789f6589d5-l7kln\" (UID: \"762d12a4-6d88-4715-923e-916dfc4ecad3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 
21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109727 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-certificates\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109775 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f954f3-df5a-4b74-b54e-1b29fee2a572-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109801 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb0c25ff-7e97-493b-8e57-e25312c1403b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109824 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/63178e9b-177f-4dd9-ac6c-e54c48264262-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109846 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19ce12ce-88ca-4e70-bce7-c87d5d064955-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n8kml\" (UID: \"19ce12ce-88ca-4e70-bce7-c87d5d064955\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109875 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07134087-479c-4666-a80c-7c50a2a5a005-metrics-tls\") pod \"dns-operator-744455d44c-kjzmh\" (UID: \"07134087-479c-4666-a80c-7c50a2a5a005\") " pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109931 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-etcd-serving-ca\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109959 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-policies\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109992 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqsd\" (UniqueName: \"kubernetes.io/projected/e42dbc11-eae0-4ed9-a653-304ed853ada3-kube-api-access-xxqsd\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110025 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-trusted-ca\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110058 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110092 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fca7e4e-2791-4d94-b4c5-efc53888b49f-cert\") pod \"ingress-canary-w225p\" (UID: \"8fca7e4e-2791-4d94-b4c5-efc53888b49f\") " 
pod="openshift-ingress-canary/ingress-canary-w225p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110124 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b445fc1-17c9-4bae-9641-6dc5388ddb28-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110165 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-oauth-config\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110234 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-dir\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/fb0c25ff-7e97-493b-8e57-e25312c1403b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110414 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9x9p\" (UniqueName: \"kubernetes.io/projected/2425979c-a518-4bd1-9289-032b9f57e016-kube-api-access-g9x9p\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-client-ca\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110567 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlskl\" (UniqueName: \"kubernetes.io/projected/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-kube-api-access-tlskl\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc 
kubenswrapper[4886]: I0219 21:01:49.110601 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a137242-a113-4f50-b83f-38cce1ae8ca2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.110765 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fc9ae95-b72d-42a3-943d-30c652843b61-node-pullsecrets\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.111479 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-config\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.112258 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-etcd-ca\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.112717 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-serving-cert\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.112866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-config\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.114470 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-audit\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.114647 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-etcd-service-ca\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.114774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7013b72c-2c60-4174-b7e9-a62de8263d50-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.115278 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: 
\"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.115426 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.115904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-trusted-ca-bundle\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.115949 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-config\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.109099 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-oauth-serving-cert\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.115961 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.115996 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-trusted-ca\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.116008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/824c61b2-5c56-4bc5-8462-6582a2ac0465-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.116883 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-serving-cert\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.117197 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-service-ca\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.117234 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-config\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.117393 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.117819 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8fc9ae95-b72d-42a3-943d-30c652843b61-etcd-serving-ca\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.118124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.118143 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-service-ca-bundle\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.118677 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-policies\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.119073 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/824c61b2-5c56-4bc5-8462-6582a2ac0465-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.119086 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-serving-cert\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.119557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-console-config\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.120038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.120171 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8fc9ae95-b72d-42a3-943d-30c652843b61-audit-dir\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q"
Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.120391 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:49.620373569 +0000 UTC m=+140.248216619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.120658 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0774fdb-3589-46f5-b263-00664fd69836-config\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.120809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-dir\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.120917 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-etcd-client\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.121510 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-trusted-ca\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.121542 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-certificates\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.121770 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.121812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c25ff-7e97-493b-8e57-e25312c1403b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.122812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.122952 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.123086 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-client-ca\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.123130 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.123624 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-tls\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.123768 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7013b72c-2c60-4174-b7e9-a62de8263d50-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.125042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8fc9ae95-b72d-42a3-943d-30c652843b61-encryption-config\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.126677 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.126738 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-oauth-config\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.127414 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.127846 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07134087-479c-4666-a80c-7c50a2a5a005-metrics-tls\") pod \"dns-operator-744455d44c-kjzmh\" (UID: \"07134087-479c-4666-a80c-7c50a2a5a005\") " pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.127954 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0774fdb-3589-46f5-b263-00664fd69836-serving-cert\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.128231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb0c25ff-7e97-493b-8e57-e25312c1403b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.129233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0774fdb-3589-46f5-b263-00664fd69836-etcd-client\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.129347 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-serving-cert\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.130117 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.130155 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/81e09dc2-360c-45f3-ae38-1d40f0345fcb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tdlnc\" (UID: \"81e09dc2-360c-45f3-ae38-1d40f0345fcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.131336 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-serving-cert\") pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.131547 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.167049 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htgjh\" (UniqueName: \"kubernetes.io/projected/fb0c25ff-7e97-493b-8e57-e25312c1403b-kube-api-access-htgjh\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.208139 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6rlp\" (UniqueName: \"kubernetes.io/projected/824c61b2-5c56-4bc5-8462-6582a2ac0465-kube-api-access-k6rlp\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4jt8\" (UID: \"824c61b2-5c56-4bc5-8462-6582a2ac0465\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.208544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq8cl\" (UniqueName: \"kubernetes.io/projected/e0774fdb-3589-46f5-b263-00664fd69836-kube-api-access-tq8cl\") pod \"etcd-operator-b45778765-sp9km\" (UID: \"e0774fdb-3589-46f5-b263-00664fd69836\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215738 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215878 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-plugins-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215901 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a137242-a113-4f50-b83f-38cce1ae8ca2-config\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215919 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/22d58380-a045-498b-aa0e-d07a603210ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a137242-a113-4f50-b83f-38cce1ae8ca2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215962 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215978 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-signing-key\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.215994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46wmt\" (UniqueName: \"kubernetes.io/projected/646ec78e-4771-472a-8d9e-5356a4799f59-kube-api-access-46wmt\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216009 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw765\" (UniqueName: \"kubernetes.io/projected/762d12a4-6d88-4715-923e-916dfc4ecad3-kube-api-access-fw765\") pod \"package-server-manager-789f6589d5-l7kln\" (UID: \"762d12a4-6d88-4715-923e-916dfc4ecad3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216027 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f954f3-df5a-4b74-b54e-1b29fee2a572-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216044 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63178e9b-177f-4dd9-ac6c-e54c48264262-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216063 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19ce12ce-88ca-4e70-bce7-c87d5d064955-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n8kml\" (UID: \"19ce12ce-88ca-4e70-bce7-c87d5d064955\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216084 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fca7e4e-2791-4d94-b4c5-efc53888b49f-cert\") pod \"ingress-canary-w225p\" (UID: \"8fca7e4e-2791-4d94-b4c5-efc53888b49f\") " pod="openshift-ingress-canary/ingress-canary-w225p"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b445fc1-17c9-4bae-9641-6dc5388ddb28-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9x9p\" (UniqueName: \"kubernetes.io/projected/2425979c-a518-4bd1-9289-032b9f57e016-kube-api-access-g9x9p\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216138 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlskl\" (UniqueName: \"kubernetes.io/projected/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-kube-api-access-tlskl\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216156 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a137242-a113-4f50-b83f-38cce1ae8ca2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c7870ee6-dc66-4746-9a0a-481377abc8a7-node-bootstrap-token\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216197 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b171410-3c97-4589-b62d-2190a13cbb3e-service-ca-bundle\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-socket-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216233 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-mountpoint-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc9kr\" (UniqueName: \"kubernetes.io/projected/8fca7e4e-2791-4d94-b4c5-efc53888b49f-kube-api-access-gc9kr\") pod \"ingress-canary-w225p\" (UID: \"8fca7e4e-2791-4d94-b4c5-efc53888b49f\") " pod="openshift-ingress-canary/ingress-canary-w225p"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67f954f3-df5a-4b74-b54e-1b29fee2a572-images\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216291 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2425979c-a518-4bd1-9289-032b9f57e016-metrics-tls\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216315 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmmr\" (UniqueName: \"kubernetes.io/projected/67f954f3-df5a-4b74-b54e-1b29fee2a572-kube-api-access-nvmmr\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216331 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3260efb9-ca7f-4ad6-af84-4302737e8250-config\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216354 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsq2\" (UniqueName: \"kubernetes.io/projected/49e472dc-470f-43d1-8fb8-31e53a1a1c40-kube-api-access-8wsq2\") pod \"control-plane-machine-set-operator-78cbb6b69f-b9c4j\" (UID: \"49e472dc-470f-43d1-8fb8-31e53a1a1c40\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216371 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-tmpfs\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn9vf\" (UniqueName: \"kubernetes.io/projected/8b5c741c-61db-4b35-997b-8edd406b5b01-kube-api-access-qn9vf\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0fd9b417-e431-4afc-b6a7-5c269fa04171-srv-cert\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-csi-data-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/762d12a4-6d88-4715-923e-916dfc4ecad3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l7kln\" (UID: \"762d12a4-6d88-4715-923e-916dfc4ecad3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202fcc8c-4e14-4336-aaef-22f33ff09ece-config-volume\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216503 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/646ec78e-4771-472a-8d9e-5356a4799f59-proxy-tls\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216516 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e66e32-ffac-421f-ab01-252cb8d4e589-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-stats-auth\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216565 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8tx\" (UniqueName: \"kubernetes.io/projected/f21bc18b-845c-491a-8d27-4cdb035e26bc-kube-api-access-hn8tx\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216581 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hhtn\" (UniqueName: \"kubernetes.io/projected/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-kube-api-access-7hhtn\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216595 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbpb\" (UniqueName: \"kubernetes.io/projected/c7870ee6-dc66-4746-9a0a-481377abc8a7-kube-api-access-4fbpb\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bppx\" (UniqueName: \"kubernetes.io/projected/19ce12ce-88ca-4e70-bce7-c87d5d064955-kube-api-access-8bppx\") pod \"multus-admission-controller-857f4d67dd-n8kml\" (UID: \"19ce12ce-88ca-4e70-bce7-c87d5d064955\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216627 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-apiservice-cert\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49k4m\" (UniqueName: \"kubernetes.io/projected/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-kube-api-access-49k4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216670 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgz4k\" (UniqueName: \"kubernetes.io/projected/202fcc8c-4e14-4336-aaef-22f33ff09ece-kube-api-access-fgz4k\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216687 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-signing-cabundle\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216701 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67f954f3-df5a-4b74-b54e-1b29fee2a572-proxy-tls\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216704 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-plugins-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.217283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"images\" (UniqueName: \"kubernetes.io/configmap/67f954f3-df5a-4b74-b54e-1b29fee2a572-images\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.217664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.224148 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a137242-a113-4f50-b83f-38cce1ae8ca2-config\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.225590 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f954f3-df5a-4b74-b54e-1b29fee2a572-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.225859 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:49.725836579 +0000 UTC m=+140.353679649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.216715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-webhook-cert\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.226314 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmml\" (UniqueName: \"kubernetes.io/projected/0fd9b417-e431-4afc-b6a7-5c269fa04171-kube-api-access-zmmml\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.226485 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0fd9b417-e431-4afc-b6a7-5c269fa04171-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227504 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-metrics-certs\") pod 
\"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227542 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/22d58380-a045-498b-aa0e-d07a603210ff-srv-cert\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227560 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b445fc1-17c9-4bae-9641-6dc5388ddb28-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227576 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/646ec78e-4771-472a-8d9e-5356a4799f59-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227611 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c7870ee6-dc66-4746-9a0a-481377abc8a7-certs\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.227865 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-socket-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.228240 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-mountpoint-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.228426 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b171410-3c97-4589-b62d-2190a13cbb3e-service-ca-bundle\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.228509 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-signing-cabundle\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/646ec78e-4771-472a-8d9e-5356a4799f59-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229284 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3260efb9-ca7f-4ad6-af84-4302737e8250-config\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229462 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a137242-a113-4f50-b83f-38cce1ae8ca2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229634 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-csi-data-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229830 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b445fc1-17c9-4bae-9641-6dc5388ddb28-config\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc 
kubenswrapper[4886]: I0219 21:01:49.229870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5l2\" (UniqueName: \"kubernetes.io/projected/3260efb9-ca7f-4ad6-af84-4302737e8250-kube-api-access-np5l2\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229890 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjv6r\" (UniqueName: \"kubernetes.io/projected/22d58380-a045-498b-aa0e-d07a603210ff-kube-api-access-qjv6r\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229905 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63178e9b-177f-4dd9-ac6c-e54c48264262-metrics-tls\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229922 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3260efb9-ca7f-4ad6-af84-4302737e8250-serving-cert\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229951 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc2hx\" (UniqueName: \"kubernetes.io/projected/a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36-kube-api-access-dc2hx\") pod \"migrator-59844c95c7-5zkr9\" (UID: 
\"a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63178e9b-177f-4dd9-ac6c-e54c48264262-trusted-ca\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.229988 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2425979c-a518-4bd1-9289-032b9f57e016-config-volume\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230004 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202fcc8c-4e14-4336-aaef-22f33ff09ece-secret-volume\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e66e32-ffac-421f-ab01-252cb8d4e589-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230041 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/49e472dc-470f-43d1-8fb8-31e53a1a1c40-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b9c4j\" (UID: \"49e472dc-470f-43d1-8fb8-31e53a1a1c40\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230071 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-default-certificate\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230086 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2669k\" (UniqueName: \"kubernetes.io/projected/6b171410-3c97-4589-b62d-2190a13cbb3e-kube-api-access-2669k\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e66e32-ffac-421f-ab01-252cb8d4e589-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6k7d\" (UniqueName: \"kubernetes.io/projected/63178e9b-177f-4dd9-ac6c-e54c48264262-kube-api-access-w6k7d\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230139 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-registration-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230230 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f21bc18b-845c-491a-8d27-4cdb035e26bc-registration-dir\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230343 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202fcc8c-4e14-4336-aaef-22f33ff09ece-config-volume\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.230608 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-tmpfs\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.231169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b445fc1-17c9-4bae-9641-6dc5388ddb28-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.231372 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19ce12ce-88ca-4e70-bce7-c87d5d064955-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n8kml\" (UID: \"19ce12ce-88ca-4e70-bce7-c87d5d064955\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.231623 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.231631 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fca7e4e-2791-4d94-b4c5-efc53888b49f-cert\") pod \"ingress-canary-w225p\" (UID: \"8fca7e4e-2791-4d94-b4c5-efc53888b49f\") " pod="openshift-ingress-canary/ingress-canary-w225p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.231636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0fd9b417-e431-4afc-b6a7-5c269fa04171-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.231957 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/22d58380-a045-498b-aa0e-d07a603210ff-srv-cert\") pod \"catalog-operator-68c6474976-s659c\" (UID: 
\"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.232622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c7870ee6-dc66-4746-9a0a-481377abc8a7-node-bootstrap-token\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.233107 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-stats-auth\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.233224 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2425979c-a518-4bd1-9289-032b9f57e016-config-volume\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.233307 4886 csr.go:261] certificate signing request csr-t7h9f is approved, waiting to be issued Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.233408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/646ec78e-4771-472a-8d9e-5356a4799f59-proxy-tls\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.234473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5b445fc1-17c9-4bae-9641-6dc5388ddb28-config\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.234542 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-metrics-certs\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.234557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63178e9b-177f-4dd9-ac6c-e54c48264262-metrics-tls\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.234725 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202fcc8c-4e14-4336-aaef-22f33ff09ece-secret-volume\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.234738 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e66e32-ffac-421f-ab01-252cb8d4e589-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.234989 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63178e9b-177f-4dd9-ac6c-e54c48264262-trusted-ca\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.235835 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67f954f3-df5a-4b74-b54e-1b29fee2a572-proxy-tls\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.236017 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e66e32-ffac-421f-ab01-252cb8d4e589-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.236061 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2425979c-a518-4bd1-9289-032b9f57e016-metrics-tls\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.236077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 
crc kubenswrapper[4886]: I0219 21:01:49.236101 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0fd9b417-e431-4afc-b6a7-5c269fa04171-srv-cert\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.236105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-apiservice-cert\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.236133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b171410-3c97-4589-b62d-2190a13cbb3e-default-certificate\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.236197 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqgx8\" (UniqueName: \"kubernetes.io/projected/81e09dc2-360c-45f3-ae38-1d40f0345fcb-kube-api-access-pqgx8\") pod \"cluster-samples-operator-665b6dd947-tdlnc\" (UID: \"81e09dc2-360c-45f3-ae38-1d40f0345fcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.239827 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49e472dc-470f-43d1-8fb8-31e53a1a1c40-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b9c4j\" (UID: 
\"49e472dc-470f-43d1-8fb8-31e53a1a1c40\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.240700 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.242589 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3260efb9-ca7f-4ad6-af84-4302737e8250-serving-cert\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.242851 4886 csr.go:257] certificate signing request csr-t7h9f is issued Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.243004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/22d58380-a045-498b-aa0e-d07a603210ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.243495 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-signing-key\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.243539 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c7870ee6-dc66-4746-9a0a-481377abc8a7-certs\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.246161 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-webhook-cert\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.247900 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj95l\" (UniqueName: \"kubernetes.io/projected/d03f4c56-f429-4911-814d-02610d24f7ec-kube-api-access-gj95l\") pod \"console-f9d7485db-rn5ms\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.255662 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/762d12a4-6d88-4715-923e-916dfc4ecad3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l7kln\" (UID: \"762d12a4-6d88-4715-923e-916dfc4ecad3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.265440 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hqp6\" (UniqueName: \"kubernetes.io/projected/d79537f2-b8d8-4f6f-8c38-65701d8c1c77-kube-api-access-4hqp6\") pod \"authentication-operator-69f744f599-x82g8\" (UID: \"d79537f2-b8d8-4f6f-8c38-65701d8c1c77\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.274174 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.278180 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56b95\" (UniqueName: \"kubernetes.io/projected/07134087-479c-4666-a80c-7c50a2a5a005-kube-api-access-56b95\") pod \"dns-operator-744455d44c-kjzmh\" (UID: \"07134087-479c-4666-a80c-7c50a2a5a005\") " pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.282913 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.302626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s2gr\" (UniqueName: \"kubernetes.io/projected/8fc9ae95-b72d-42a3-943d-30c652843b61-kube-api-access-6s2gr\") pod \"apiserver-76f77b778f-sk46q\" (UID: \"8fc9ae95-b72d-42a3-943d-30c652843b61\") " pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.309922 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.316874 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.319503 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4q2\" (UniqueName: \"kubernetes.io/projected/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-kube-api-access-kk4q2\") pod \"controller-manager-879f6c89f-q85bd\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.331312 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.331764 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:49.83174649 +0000 UTC m=+140.459589540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.331871 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.336616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb0c25ff-7e97-493b-8e57-e25312c1403b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hbjk6\" (UID: \"fb0c25ff-7e97-493b-8e57-e25312c1403b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.362999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2cjx\" (UniqueName: \"kubernetes.io/projected/b77d3d20-a193-4bf3-a448-e48059491a85-kube-api-access-r2cjx\") pod \"downloads-7954f5f757-6lvw2\" (UID: \"b77d3d20-a193-4bf3-a448-e48059491a85\") " pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.379431 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-bound-sa-token\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.401748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqsd\" (UniqueName: \"kubernetes.io/projected/e42dbc11-eae0-4ed9-a653-304ed853ada3-kube-api-access-xxqsd\") pod \"oauth-openshift-558db77b4-5f4xd\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.418973 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mkq9\" (UniqueName: \"kubernetes.io/projected/5d6ed5aa-d2e5-4622-8506-4ef0502af8c2-kube-api-access-5mkq9\") 
pod \"console-operator-58897d9998-77m65\" (UID: \"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2\") " pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.432626 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.433079 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:49.93306483 +0000 UTC m=+140.560907880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.440224 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.440874 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2k9\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-kube-api-access-zb2k9\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.487223 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63178e9b-177f-4dd9-ac6c-e54c48264262-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.509642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" event={"ID":"388b84d7-1fa4-4c41-88f3-85a8b836c6a0","Type":"ContainerStarted","Data":"3dbd9bf5ac7285c764748199e01c52562a501378f1caa505efd5c93e05611f4f"} Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.510047 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.510380 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a137242-a113-4f50-b83f-38cce1ae8ca2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-npnjq\" (UID: \"1a137242-a113-4f50-b83f-38cce1ae8ca2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.512630 4886 generic.go:334] "Generic (PLEG): container finished" podID="a66fafd0-fa91-4368-8262-88fc7ef86dfa" containerID="625448c6d3451ab9dad9b88b06b79f66a3bbfd16504475a82d19a35190a07d41" exitCode=0 Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.514452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" 
event={"ID":"a66fafd0-fa91-4368-8262-88fc7ef86dfa","Type":"ContainerDied","Data":"625448c6d3451ab9dad9b88b06b79f66a3bbfd16504475a82d19a35190a07d41"} Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.518895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc9kr\" (UniqueName: \"kubernetes.io/projected/8fca7e4e-2791-4d94-b4c5-efc53888b49f-kube-api-access-gc9kr\") pod \"ingress-canary-w225p\" (UID: \"8fca7e4e-2791-4d94-b4c5-efc53888b49f\") " pod="openshift-ingress-canary/ingress-canary-w225p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.528173 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.530383 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.534396 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.535167 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.035154029 +0000 UTC m=+140.662997079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.539093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b445fc1-17c9-4bae-9641-6dc5388ddb28-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-78xq9\" (UID: \"5b445fc1-17c9-4bae-9641-6dc5388ddb28\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.543994 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.553322 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.557999 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.562124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw765\" (UniqueName: \"kubernetes.io/projected/762d12a4-6d88-4715-923e-916dfc4ecad3-kube-api-access-fw765\") pod \"package-server-manager-789f6589d5-l7kln\" (UID: \"762d12a4-6d88-4715-923e-916dfc4ecad3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.566560 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.582276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46wmt\" (UniqueName: \"kubernetes.io/projected/646ec78e-4771-472a-8d9e-5356a4799f59-kube-api-access-46wmt\") pod \"machine-config-controller-84d6567774-bhfmd\" (UID: \"646ec78e-4771-472a-8d9e-5356a4799f59\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.593541 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.601001 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9x9p\" (UniqueName: \"kubernetes.io/projected/2425979c-a518-4bd1-9289-032b9f57e016-kube-api-access-g9x9p\") pod \"dns-default-qvvdv\" (UID: \"2425979c-a518-4bd1-9289-032b9f57e016\") " pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.602429 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.635489 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.636201 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.136185203 +0000 UTC m=+140.764028253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.653911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlskl\" (UniqueName: \"kubernetes.io/projected/ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d-kube-api-access-tlskl\") pod \"service-ca-9c57cc56f-wmdwc\" (UID: \"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.658592 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49k4m\" (UniqueName: 
\"kubernetes.io/projected/7675fc04-f59f-4fc8-9bd2-e4a67bdba66a-kube-api-access-49k4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-wfhmg\" (UID: \"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.659273 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgz4k\" (UniqueName: \"kubernetes.io/projected/202fcc8c-4e14-4336-aaef-22f33ff09ece-kube-api-access-fgz4k\") pod \"collect-profiles-29525580-5pbww\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.666354 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.679121 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmmr\" (UniqueName: \"kubernetes.io/projected/67f954f3-df5a-4b74-b54e-1b29fee2a572-kube-api-access-nvmmr\") pod \"machine-config-operator-74547568cd-dd7wx\" (UID: \"67f954f3-df5a-4b74-b54e-1b29fee2a572\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.679906 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.687469 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kjzmh"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.689091 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.698640 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.699106 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsq2\" (UniqueName: \"kubernetes.io/projected/49e472dc-470f-43d1-8fb8-31e53a1a1c40-kube-api-access-8wsq2\") pod \"control-plane-machine-set-operator-78cbb6b69f-b9c4j\" (UID: \"49e472dc-470f-43d1-8fb8-31e53a1a1c40\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.718760 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.721309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hhtn\" (UniqueName: \"kubernetes.io/projected/dbb72093-2c40-4a92-b1bf-18d8175fb1c8-kube-api-access-7hhtn\") pod \"packageserver-d55dfcdfc-r54js\" (UID: \"dbb72093-2c40-4a92-b1bf-18d8175fb1c8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.734620 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-w225p" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.736728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.736829 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbpb\" (UniqueName: \"kubernetes.io/projected/c7870ee6-dc66-4746-9a0a-481377abc8a7-kube-api-access-4fbpb\") pod \"machine-config-server-bgmxp\" (UID: \"c7870ee6-dc66-4746-9a0a-481377abc8a7\") " pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.737183 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.237170876 +0000 UTC m=+140.865013926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.742404 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.751584 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.756566 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x82g8"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.762865 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bppx\" (UniqueName: \"kubernetes.io/projected/19ce12ce-88ca-4e70-bce7-c87d5d064955-kube-api-access-8bppx\") pod \"multus-admission-controller-857f4d67dd-n8kml\" (UID: \"19ce12ce-88ca-4e70-bce7-c87d5d064955\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.778509 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.783943 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmml\" (UniqueName: \"kubernetes.io/projected/0fd9b417-e431-4afc-b6a7-5c269fa04171-kube-api-access-zmmml\") pod \"olm-operator-6b444d44fb-t8hr5\" (UID: \"0fd9b417-e431-4afc-b6a7-5c269fa04171\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.785153 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.791794 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.810721 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5f4xd"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.817252 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sp9km"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.827752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8tx\" (UniqueName: \"kubernetes.io/projected/f21bc18b-845c-491a-8d27-4cdb035e26bc-kube-api-access-hn8tx\") pod \"csi-hostpathplugin-qjcds\" (UID: \"f21bc18b-845c-491a-8d27-4cdb035e26bc\") " pod="hostpath-provisioner/csi-hostpathplugin-qjcds" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.834420 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bgmxp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.834801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn9vf\" (UniqueName: \"kubernetes.io/projected/8b5c741c-61db-4b35-997b-8edd406b5b01-kube-api-access-qn9vf\") pod \"marketplace-operator-79b997595-nqhlp\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.845283 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.845681 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.345666029 +0000 UTC m=+140.973509079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.849084 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.858829 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.858867 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rn5ms"] Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.864440 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5l2\" (UniqueName: \"kubernetes.io/projected/3260efb9-ca7f-4ad6-af84-4302737e8250-kube-api-access-np5l2\") pod \"service-ca-operator-777779d784-2d7s4\" (UID: \"3260efb9-ca7f-4ad6-af84-4302737e8250\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.878699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjv6r\" (UniqueName: \"kubernetes.io/projected/22d58380-a045-498b-aa0e-d07a603210ff-kube-api-access-qjv6r\") pod 
\"catalog-operator-68c6474976-s659c\" (UID: \"22d58380-a045-498b-aa0e-d07a603210ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.881663 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e66e32-ffac-421f-ab01-252cb8d4e589-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2brfc\" (UID: \"21e66e32-ffac-421f-ab01-252cb8d4e589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.910961 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2669k\" (UniqueName: \"kubernetes.io/projected/6b171410-3c97-4589-b62d-2190a13cbb3e-kube-api-access-2669k\") pod \"router-default-5444994796-2wz7p\" (UID: \"6b171410-3c97-4589-b62d-2190a13cbb3e\") " pod="openshift-ingress/router-default-5444994796-2wz7p"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.924720 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6k7d\" (UniqueName: \"kubernetes.io/projected/63178e9b-177f-4dd9-ac6c-e54c48264262-kube-api-access-w6k7d\") pod \"ingress-operator-5b745b69d9-bz9b2\" (UID: \"63178e9b-177f-4dd9-ac6c-e54c48264262\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.941084 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.950791 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc2hx\" (UniqueName: \"kubernetes.io/projected/a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36-kube-api-access-dc2hx\") pod \"migrator-59844c95c7-5zkr9\" (UID: \"a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.951511 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:49 crc kubenswrapper[4886]: E0219 21:01:49.951788 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.451774274 +0000 UTC m=+141.079617324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.951796 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2wz7p"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.954197 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sk46q"]
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.957791 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc"
Feb 19 21:01:49 crc kubenswrapper[4886]: I0219 21:01:49.976224 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.008403 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.028800 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.052616 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.052819 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.552789948 +0000 UTC m=+141.180632998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.052876 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.053332 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.553325311 +0000 UTC m=+141.181168361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.059762 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.066498 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.072031 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.122036 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qjcds"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.157593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.158177 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.658152046 +0000 UTC m=+141.285995096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.215792 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-q85bd"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.241672 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.243720 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 20:56:49 +0000 UTC, rotation deadline is 2027-01-08 02:39:21.913844794 +0000 UTC
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.243763 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7733h37m31.670083515s for next certificate rotation
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.244099 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-77m65"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.259453 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.259760 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.759749773 +0000 UTC m=+141.387592823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.326470 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6"]
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.360863 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.860812697 +0000 UTC m=+141.488655747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.362449 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.364420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.364781 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.864768932 +0000 UTC m=+141.492611982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: W0219 21:01:50.411556 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7675fc04_f59f_4fc8_9bd2_e4a67bdba66a.slice/crio-a5b487eb7347401fba81ac99034285e92bead1cdd678e0814ddc88ff37316293 WatchSource:0}: Error finding container a5b487eb7347401fba81ac99034285e92bead1cdd678e0814ddc88ff37316293: Status 404 returned error can't find the container with id a5b487eb7347401fba81ac99034285e92bead1cdd678e0814ddc88ff37316293
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.467420 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.467601 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.967564688 +0000 UTC m=+141.595407738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.467658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.467961 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:50.967950258 +0000 UTC m=+141.595793308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.501299 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.521406 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" podStartSLOduration=116.5213767 podStartE2EDuration="1m56.5213767s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:50.519813022 +0000 UTC m=+141.147656062" watchObservedRunningTime="2026-02-19 21:01:50.5213767 +0000 UTC m=+141.149219750"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.526841 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.557914 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6lvw2"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.568613 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.568767 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.068744026 +0000 UTC m=+141.696587076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.568796 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.569129 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.069118475 +0000 UTC m=+141.696961525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.648233 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pt2hw" podStartSLOduration=117.648212422 podStartE2EDuration="1m57.648212422s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:50.6422896 +0000 UTC m=+141.270132650" watchObservedRunningTime="2026-02-19 21:01:50.648212422 +0000 UTC m=+141.276055472"
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.657834 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.657870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" event={"ID":"824c61b2-5c56-4bc5-8462-6582a2ac0465","Type":"ContainerStarted","Data":"4ff5af9144f64ccce413513204adaf3607e6f93a486b9a01799325d0205c0acc"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.657887 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-77m65" event={"ID":"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2","Type":"ContainerStarted","Data":"fdc37476b9fa41569178f3749f3a90ca6b7d4445fe3bd54ffd509d7aefbf9ae4"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.657897 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2wz7p" event={"ID":"6b171410-3c97-4589-b62d-2190a13cbb3e","Type":"ContainerStarted","Data":"96a66ce4c395cedcc75d909ed65971167356f7ba90ce83ae792bc44cef03a34a"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.657908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" event={"ID":"07134087-479c-4666-a80c-7c50a2a5a005","Type":"ContainerStarted","Data":"46433a2b776f75ad73c6ea1e8d773537eb50fb042801e2f0108f6786f9e4f787"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.657916 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" event={"ID":"07134087-479c-4666-a80c-7c50a2a5a005","Type":"ContainerStarted","Data":"6a06633eca10e0e58d33a161f90a46ac9012acc4612f9c9d6845d262260dcab3"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.658011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" event={"ID":"e42dbc11-eae0-4ed9-a653-304ed853ada3","Type":"ContainerStarted","Data":"e07229e732b34ead6e23eaedb6fb6fcb67b3e6f0151ea9608319b812707d6aee"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.663787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" event={"ID":"e0774fdb-3589-46f5-b263-00664fd69836","Type":"ContainerStarted","Data":"05008d3cd0fbb8e7f7ede21d674a66fdaa85da377053b48753a279ca712fb507"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.666175 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bgmxp" event={"ID":"c7870ee6-dc66-4746-9a0a-481377abc8a7","Type":"ContainerStarted","Data":"3f78f426e7be0e9f6317bff085a681bec3b39a31242409e1aef7006ec5388ca9"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.667567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" event={"ID":"8fc9ae95-b72d-42a3-943d-30c652843b61","Type":"ContainerStarted","Data":"e3f91f75d8b76f151a956daa6fce28df13af7a12e12b36dee010747865ed6f79"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.669618 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.669980 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.169964444 +0000 UTC m=+141.797807494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.701215 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" event={"ID":"a66fafd0-fa91-4368-8262-88fc7ef86dfa","Type":"ContainerStarted","Data":"57476172dc5f1e8446bde50bb9dc1809f189eade9fb3e70fd5c5672003f11085"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.710400 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" event={"ID":"14f032d2-80a3-4c8a-a5c8-b82764dc2f18","Type":"ContainerStarted","Data":"3c1de9a087077a196ad09c487a0e820cde092a7d350945867a8d1e04ffb974ac"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.717657 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rn5ms" event={"ID":"d03f4c56-f429-4911-814d-02610d24f7ec","Type":"ContainerStarted","Data":"7398c33e8ae2fc639ebb6839f096840df6221e0b1ed47c6818ca40b0e1d2daa9"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.730000 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" event={"ID":"d79537f2-b8d8-4f6f-8c38-65701d8c1c77","Type":"ContainerStarted","Data":"d628486841c930d70ac0ea808568e989d3ebe7dc1deeb8b3432ac9c1ff883006"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.730353 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" event={"ID":"d79537f2-b8d8-4f6f-8c38-65701d8c1c77","Type":"ContainerStarted","Data":"2ba960d9d05230a98b70285236b8543170c8fac0a0c238b96336ffedd59760ab"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.732809 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" event={"ID":"fb0c25ff-7e97-493b-8e57-e25312c1403b","Type":"ContainerStarted","Data":"ddf63a67d015aae43690a984ccb2fc183fc3926b63b51f745bc3a5cb1d85c65f"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.736500 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" event={"ID":"81e09dc2-360c-45f3-ae38-1d40f0345fcb","Type":"ContainerStarted","Data":"f29d7b21f7112e1638b73643690edca9eee0342ada93f8197e7ee01ab1bf231f"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.736542 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" event={"ID":"81e09dc2-360c-45f3-ae38-1d40f0345fcb","Type":"ContainerStarted","Data":"cba20a184ee07847fef61790bf9c1ee505ea9a44558158e70f3ed3e6e4f7f9f4"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.738806 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" event={"ID":"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a","Type":"ContainerStarted","Data":"a5b487eb7347401fba81ac99034285e92bead1cdd678e0814ddc88ff37316293"}
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.771166 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.772568 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.272553855 +0000 UTC m=+141.900396995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.838999 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wmdwc"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.871976 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.872236 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.372206746 +0000 UTC m=+142.000049796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.872376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.872675 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.372663377 +0000 UTC m=+142.000506427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.879531 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qvvdv"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.903089 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.906839 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-w225p"]
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.910661 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"]
Feb 19 21:01:50 crc kubenswrapper[4886]: W0219 21:01:50.957832 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f954f3_df5a_4b74_b54e_1b29fee2a572.slice/crio-5f920147bafabc02c31bd4ffe832c861e0d0916d874de0c3b8d18a3a94e991cb WatchSource:0}: Error finding container 5f920147bafabc02c31bd4ffe832c861e0d0916d874de0c3b8d18a3a94e991cb: Status 404 returned error can't find the container with id 5f920147bafabc02c31bd4ffe832c861e0d0916d874de0c3b8d18a3a94e991cb
Feb 19 21:01:50 crc kubenswrapper[4886]: I0219 21:01:50.977926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:50 crc kubenswrapper[4886]: E0219 21:01:50.978199 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.478185218 +0000 UTC m=+142.106028268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.004351 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.022142 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.023375 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n8kml"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.031948 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqhlp"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.041332 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.074050 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.077238 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.079640 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4"
Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.080058 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.580046232 +0000 UTC m=+142.207889282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.146074 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qjcds"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.147769 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"]
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.167614 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jg8st" podStartSLOduration=118.167600493 podStartE2EDuration="1m58.167600493s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:51.165183315 +0000 UTC m=+141.793026375" watchObservedRunningTime="2026-02-19 21:01:51.167600493 +0000 UTC m=+141.795443543"
Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.182377 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.182718 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.682704885 +0000 UTC m=+142.310547935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.220912 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9"] Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.225801 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2"] Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.235043 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4"] Feb 19 21:01:51 crc kubenswrapper[4886]: W0219 21:01:51.264025 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63178e9b_177f_4dd9_ac6c_e54c48264262.slice/crio-08dd4ddd559d2c73151ca76cdfce4a672df5e7b5ec3ba354e0bab1f195b89f36 WatchSource:0}: Error finding container 08dd4ddd559d2c73151ca76cdfce4a672df5e7b5ec3ba354e0bab1f195b89f36: Status 404 returned error can't find the container with id 08dd4ddd559d2c73151ca76cdfce4a672df5e7b5ec3ba354e0bab1f195b89f36 Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.286515 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.286899 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.786880893 +0000 UTC m=+142.414723943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.360807 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwbrv" podStartSLOduration=117.360788576 podStartE2EDuration="1m57.360788576s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:51.359414693 +0000 UTC m=+141.987257743" watchObservedRunningTime="2026-02-19 21:01:51.360788576 +0000 UTC m=+141.988631626" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.387316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.387740 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.887715992 +0000 UTC m=+142.515559042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.481483 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" podStartSLOduration=118.481426661 podStartE2EDuration="1m58.481426661s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:51.475100679 +0000 UTC m=+142.102943729" watchObservedRunningTime="2026-02-19 21:01:51.481426661 +0000 UTC m=+142.109269711" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.489929 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: 
\"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.490253 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:51.990241632 +0000 UTC m=+142.618084672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.591085 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.591705 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.091691286 +0000 UTC m=+142.719534336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.617006 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" podStartSLOduration=118.616983673 podStartE2EDuration="1m58.616983673s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:51.599506243 +0000 UTC m=+142.227349283" watchObservedRunningTime="2026-02-19 21:01:51.616983673 +0000 UTC m=+142.244826713" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.666645 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" podStartSLOduration=117.666628924 podStartE2EDuration="1m57.666628924s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:51.656364627 +0000 UTC m=+142.284207667" watchObservedRunningTime="2026-02-19 21:01:51.666628924 +0000 UTC m=+142.294471964" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.688726 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rn5ms" podStartSLOduration=118.688707623 podStartE2EDuration="1m58.688707623s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:51.678837967 +0000 UTC m=+142.306681007" watchObservedRunningTime="2026-02-19 21:01:51.688707623 +0000 UTC m=+142.316550673" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.695762 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.696120 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.19609182 +0000 UTC m=+142.823934870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.760936 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" event={"ID":"67f954f3-df5a-4b74-b54e-1b29fee2a572","Type":"ContainerStarted","Data":"6a914584f93d467d8fcb4ae106558d38e108107f96bd5c851af0d64ee15ac14a"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.761242 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" event={"ID":"67f954f3-df5a-4b74-b54e-1b29fee2a572","Type":"ContainerStarted","Data":"5f920147bafabc02c31bd4ffe832c861e0d0916d874de0c3b8d18a3a94e991cb"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.774380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" event={"ID":"e0774fdb-3589-46f5-b263-00664fd69836","Type":"ContainerStarted","Data":"f2c6fd4b4ebc31cd37bcf03f8f44c72e0416cb8a72c2293a853340992413d0fd"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.779144 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" event={"ID":"a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36","Type":"ContainerStarted","Data":"4279442d1148a01eaccd46e1ec071bb6ac4c7678857fa74503a617c2ebe0262b"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.779910 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" event={"ID":"646ec78e-4771-472a-8d9e-5356a4799f59","Type":"ContainerStarted","Data":"389f657ee6ee83bb73f164e9c657ca69af15bfb141a69a21f7cac340259d7a0f"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.782096 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" event={"ID":"5b445fc1-17c9-4bae-9641-6dc5388ddb28","Type":"ContainerStarted","Data":"b12b80f4d3dc28288c47105d9f99e219fb667b4bee59460de57a1ffdf8e39dee"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.782116 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" event={"ID":"5b445fc1-17c9-4bae-9641-6dc5388ddb28","Type":"ContainerStarted","Data":"e444b53e8e8702b920d37dcb3302b4526671c40d153c74afe82047fcf93927af"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.795926 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" event={"ID":"81e09dc2-360c-45f3-ae38-1d40f0345fcb","Type":"ContainerStarted","Data":"aa396659bf7ba4afd009f4e6cafc7d767f78506c7895ed79369de53eb46617b2"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.796745 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.797036 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 21:01:52.297023292 +0000 UTC m=+142.924866342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.806844 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-w225p" event={"ID":"8fca7e4e-2791-4d94-b4c5-efc53888b49f","Type":"ContainerStarted","Data":"6e572a72535b79360d5a224d45d4294dcf7f22dd699d33b1bb2cd4c11236076d"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.806890 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-w225p" event={"ID":"8fca7e4e-2791-4d94-b4c5-efc53888b49f","Type":"ContainerStarted","Data":"6fd24f12325676f81202b8f21f38cf91d71db616639e2b87471c83fcf6c3a540"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.816317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2wz7p" event={"ID":"6b171410-3c97-4589-b62d-2190a13cbb3e","Type":"ContainerStarted","Data":"77d5bb2375829a1ba73d0ec9c623fa93c14ac465ea228b73ecc32cedd015fe81"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.819129 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" event={"ID":"19ce12ce-88ca-4e70-bce7-c87d5d064955","Type":"ContainerStarted","Data":"3f2c66d46acfddbb9906ab93791dab012d7389a058e7d6d97210dc4e24e83526"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.821600 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" event={"ID":"07134087-479c-4666-a80c-7c50a2a5a005","Type":"ContainerStarted","Data":"e0ebc8cc54612e2c340a65d410a51d6ed3c3d74b73d78d087387892e34ccf6f0"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.827380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bgmxp" event={"ID":"c7870ee6-dc66-4746-9a0a-481377abc8a7","Type":"ContainerStarted","Data":"6daac312b956d3cb637786f92f9b18d315aa5040d33ab385cf807025dc5a9290"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.830904 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" event={"ID":"202fcc8c-4e14-4336-aaef-22f33ff09ece","Type":"ContainerStarted","Data":"2bb2cae5a335cf79ddf50032b6eedc51a3d6b83f20b545c8dca3fea682f546b8"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.836422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvvdv" event={"ID":"2425979c-a518-4bd1-9289-032b9f57e016","Type":"ContainerStarted","Data":"0c6a291ed0e2df90a4b4496628d803849dcb7b751dfc3aa80265ea401bf7f767"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.837420 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvvdv" event={"ID":"2425979c-a518-4bd1-9289-032b9f57e016","Type":"ContainerStarted","Data":"84fc91437b9199ed271994eb76540b19fa7be85054d210ad49f274aeb38d4327"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.849511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" event={"ID":"762d12a4-6d88-4715-923e-916dfc4ecad3","Type":"ContainerStarted","Data":"c64fb97a84836f7a747e0714fcdbdf609bd7a384dae78e3f77699dc1f5d8169a"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.849539 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" event={"ID":"762d12a4-6d88-4715-923e-916dfc4ecad3","Type":"ContainerStarted","Data":"54078929d08ac823e442ce5c17d33bbd6e4a7451f5fce17b441ce595fd918219"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.856363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" event={"ID":"1a137242-a113-4f50-b83f-38cce1ae8ca2","Type":"ContainerStarted","Data":"51ffd634b43fadd800dff122a6a82e88270e5e07c5eeef282de52c2bac865c05"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.856395 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" event={"ID":"1a137242-a113-4f50-b83f-38cce1ae8ca2","Type":"ContainerStarted","Data":"02c3623ef3145c6b22c7c377d5064231dd2130beb762eba210e135f610e6daa5"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.863309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" event={"ID":"63178e9b-177f-4dd9-ac6c-e54c48264262","Type":"ContainerStarted","Data":"08dd4ddd559d2c73151ca76cdfce4a672df5e7b5ec3ba354e0bab1f195b89f36"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.866890 4886 generic.go:334] "Generic (PLEG): container finished" podID="8fc9ae95-b72d-42a3-943d-30c652843b61" containerID="d454174f6d75acba5350364ffe8e3d0828004333e931ffc95276b99caaefc86c" exitCode=0 Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.866943 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" event={"ID":"8fc9ae95-b72d-42a3-943d-30c652843b61","Type":"ContainerDied","Data":"d454174f6d75acba5350364ffe8e3d0828004333e931ffc95276b99caaefc86c"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.877202 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" event={"ID":"7675fc04-f59f-4fc8-9bd2-e4a67bdba66a","Type":"ContainerStarted","Data":"456a19b2d78e9cb87263f87b69c75dd14d50b85cf67450b88298d1fda2430def"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.880058 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" event={"ID":"f21bc18b-845c-491a-8d27-4cdb035e26bc","Type":"ContainerStarted","Data":"d5970402760dad55f336b524cea8a57258f8e842c1ecb4dc22d4bce829a5af13"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.883379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" event={"ID":"22d58380-a045-498b-aa0e-d07a603210ff","Type":"ContainerStarted","Data":"61add223ff29949e100cd2c15db938aac726e54d598c0ea9174a8a5dc24ce551"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.884316 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.897452 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.897497 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.899354 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.900646 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" event={"ID":"e42dbc11-eae0-4ed9-a653-304ed853ada3","Type":"ContainerStarted","Data":"ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.901167 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:51 crc kubenswrapper[4886]: E0219 21:01:51.901614 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.401603041 +0000 UTC m=+143.029446091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.903163 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" event={"ID":"0fd9b417-e431-4afc-b6a7-5c269fa04171","Type":"ContainerStarted","Data":"b1a69a16f44956c9afcd7f874dfdca2387086ca389dc1b5bb9464e03162fbd89"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.909932 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-77m65" event={"ID":"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2","Type":"ContainerStarted","Data":"58c368dab609f56b60b359eef9a7977b93dc89c3f28ec1c50df2606fe949ea1d"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.910384 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.930933 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.930978 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.933854 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6lvw2" event={"ID":"b77d3d20-a193-4bf3-a448-e48059491a85","Type":"ContainerStarted","Data":"5b762d4270bc500cb97a12003603383111fe7f99f66c6fc7afcbf535233d53df"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.933895 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6lvw2" event={"ID":"b77d3d20-a193-4bf3-a448-e48059491a85","Type":"ContainerStarted","Data":"1ec6c13a73cdcc30f192d7bf87a8b432b326bb19a0805ceedbb1c95a16257e85"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.934681 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.937372 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-6lvw2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.937488 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.951702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rn5ms" event={"ID":"d03f4c56-f429-4911-814d-02610d24f7ec","Type":"ContainerStarted","Data":"2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.953361 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.965642 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:51 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:51 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:51 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.965680 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.965875 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" event={"ID":"8b5c741c-61db-4b35-997b-8edd406b5b01","Type":"ContainerStarted","Data":"9c431e15080ab3017fb7d52960ddb41e68c2380bbd65ec92c2ef9ac5551cb70b"} Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.967502 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.969534 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nqhlp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.969580 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 19 21:01:51 crc kubenswrapper[4886]: I0219 21:01:51.985437 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" event={"ID":"fb0c25ff-7e97-493b-8e57-e25312c1403b","Type":"ContainerStarted","Data":"14d033d9186bf6efd2607dd851eb848c94504b17c5582e0a8816c135acacfccd"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.001172 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.002158 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.502140173 +0000 UTC m=+143.129983223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.051861 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" event={"ID":"824c61b2-5c56-4bc5-8462-6582a2ac0465","Type":"ContainerStarted","Data":"f36a08184781d71d68f6dddd3684894449b8917b8fe5685069c26e62599f6fee"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.054100 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" event={"ID":"dbb72093-2c40-4a92-b1bf-18d8175fb1c8","Type":"ContainerStarted","Data":"0a10435e9325dfbab5af57e0a2eef876b5876904c17b9d7fb25deec0364e6084"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.054126 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" event={"ID":"dbb72093-2c40-4a92-b1bf-18d8175fb1c8","Type":"ContainerStarted","Data":"65cf2d7186dd848b569efaf0aab215ca39e7f0b25524fffb1b46b8078564b309"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.054904 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.061150 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" 
event={"ID":"49e472dc-470f-43d1-8fb8-31e53a1a1c40","Type":"ContainerStarted","Data":"0bb9f44d285c240db9e204b3e0a17a3e5f075f7211a3d9043daaf59268e370c6"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.061183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" event={"ID":"49e472dc-470f-43d1-8fb8-31e53a1a1c40","Type":"ContainerStarted","Data":"ce622887483349d0c3cbe36c3047c13605a84083c7071ff111a8df53d83aa881"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.062731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" event={"ID":"3260efb9-ca7f-4ad6-af84-4302737e8250","Type":"ContainerStarted","Data":"d0ca66a50732a3a56671ab3f161e0ea1ec5b53bb0c86992a7ce10bcbef4e1fe5"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.064483 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" event={"ID":"14f032d2-80a3-4c8a-a5c8-b82764dc2f18","Type":"ContainerStarted","Data":"fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.070434 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.070485 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 
21:01:52.077921 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" event={"ID":"21e66e32-ffac-421f-ab01-252cb8d4e589","Type":"ContainerStarted","Data":"1e4b543e95a52ae93edebdc27c6916bd68704c4563cf61c80af4ed19f57242f6"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.080870 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.082232 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" event={"ID":"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d","Type":"ContainerStarted","Data":"c5d7f6d5343d32f49e09a7231697bcf847a0803c729577354f9419a89e47dd6f"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.082298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" event={"ID":"ea23ad9d-a037-4f7c-bf28-0e7c27fe0a2d","Type":"ContainerStarted","Data":"907d00c8e51a9aa628a0414396427a99c87e8ce3a6f9c63c4ea4fb60e94d3dd7"} Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.107292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.128391 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.62834114 +0000 UTC m=+143.256184190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.209213 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.209537 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.709523138 +0000 UTC m=+143.337366188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.311095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.311427 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.811411232 +0000 UTC m=+143.439254282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.361814 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kjzmh" podStartSLOduration=118.361802121 podStartE2EDuration="1m58.361802121s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.361049703 +0000 UTC m=+142.988892753" watchObservedRunningTime="2026-02-19 21:01:52.361802121 +0000 UTC m=+142.989645171" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.395124 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-npnjq" podStartSLOduration=118.39510293 podStartE2EDuration="1m58.39510293s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.392364194 +0000 UTC m=+143.020207234" watchObservedRunningTime="2026-02-19 21:01:52.39510293 +0000 UTC m=+143.022945980" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.412197 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.412376 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.912340044 +0000 UTC m=+143.540183094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.412464 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.412784 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:52.912773084 +0000 UTC m=+143.540616134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.481108 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wmdwc" podStartSLOduration=118.481090743 podStartE2EDuration="1m58.481090743s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.480496959 +0000 UTC m=+143.108340009" watchObservedRunningTime="2026-02-19 21:01:52.481090743 +0000 UTC m=+143.108933783" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.513620 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.514060 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.014042844 +0000 UTC m=+143.641885894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.527243 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6lvw2" podStartSLOduration=119.52722678 podStartE2EDuration="1m59.52722678s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.526332918 +0000 UTC m=+143.154175968" watchObservedRunningTime="2026-02-19 21:01:52.52722678 +0000 UTC m=+143.155069830" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.556839 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78xq9" podStartSLOduration=118.55682414 podStartE2EDuration="1m58.55682414s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.556161624 +0000 UTC m=+143.184004674" watchObservedRunningTime="2026-02-19 21:01:52.55682414 +0000 UTC m=+143.184667190" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.595357 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" podStartSLOduration=119.595339194 podStartE2EDuration="1m59.595339194s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.594703849 +0000 UTC m=+143.222546899" watchObservedRunningTime="2026-02-19 21:01:52.595339194 +0000 UTC m=+143.223182244" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.618902 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.619163 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.119152825 +0000 UTC m=+143.746995875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.677407 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sp9km" podStartSLOduration=118.677391732 podStartE2EDuration="1m58.677391732s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.675870146 +0000 UTC m=+143.303713196" watchObservedRunningTime="2026-02-19 21:01:52.677391732 +0000 UTC m=+143.305234782" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.677961 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-77m65" podStartSLOduration=119.677956816 podStartE2EDuration="1m59.677956816s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.642842374 +0000 UTC m=+143.270685424" watchObservedRunningTime="2026-02-19 21:01:52.677956816 +0000 UTC m=+143.305799866" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.707615 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.707687 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.716653 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podStartSLOduration=118.716580913 podStartE2EDuration="1m58.716580913s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.711137162 +0000 UTC m=+143.338980212" watchObservedRunningTime="2026-02-19 21:01:52.716580913 +0000 UTC m=+143.344423953" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.720013 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.720319 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.220303762 +0000 UTC m=+143.848146812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.721067 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.751281 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-w225p" podStartSLOduration=6.7512500840000005 podStartE2EDuration="6.751250084s" podCreationTimestamp="2026-02-19 21:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.75024053 +0000 UTC m=+143.378083580" watchObservedRunningTime="2026-02-19 21:01:52.751250084 +0000 UTC m=+143.379093134" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.790274 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tdlnc" podStartSLOduration=119.79024839 podStartE2EDuration="1m59.79024839s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.789932332 +0000 UTC m=+143.417775382" watchObservedRunningTime="2026-02-19 21:01:52.79024839 +0000 UTC m=+143.418091440" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.821658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.821954 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.32193741 +0000 UTC m=+143.949780460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.870493 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" podStartSLOduration=118.870476715 podStartE2EDuration="1m58.870476715s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.869915981 +0000 UTC m=+143.497759031" watchObservedRunningTime="2026-02-19 21:01:52.870476715 +0000 UTC m=+143.498319765" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.872425 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2wz7p" podStartSLOduration=118.872418361 
podStartE2EDuration="1m58.872418361s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.83319045 +0000 UTC m=+143.461033500" watchObservedRunningTime="2026-02-19 21:01:52.872418361 +0000 UTC m=+143.500261411" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.914239 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" podStartSLOduration=118.914223224 podStartE2EDuration="1m58.914223224s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.909670535 +0000 UTC m=+143.537513585" watchObservedRunningTime="2026-02-19 21:01:52.914223224 +0000 UTC m=+143.542066274" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.922958 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:52 crc kubenswrapper[4886]: E0219 21:01:52.923448 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.423430315 +0000 UTC m=+144.051273375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.952026 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wfhmg" podStartSLOduration=118.952012121 podStartE2EDuration="1m58.952012121s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:52.950236158 +0000 UTC m=+143.578079228" watchObservedRunningTime="2026-02-19 21:01:52.952012121 +0000 UTC m=+143.579855181" Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.966754 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:52 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:52 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:52 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:52 crc kubenswrapper[4886]: I0219 21:01:52.966806 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.024201 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.024885 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.524775906 +0000 UTC m=+144.152618966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.061546 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b9c4j" podStartSLOduration=119.061523888 podStartE2EDuration="1m59.061523888s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.009041559 +0000 UTC m=+143.636884619" watchObservedRunningTime="2026-02-19 21:01:53.061523888 +0000 UTC m=+143.689366948" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.096393 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hbjk6" podStartSLOduration=120.096376124 podStartE2EDuration="2m0.096376124s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.066554969 +0000 UTC m=+143.694398029" watchObservedRunningTime="2026-02-19 21:01:53.096376124 +0000 UTC m=+143.724219174" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.098116 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" event={"ID":"646ec78e-4771-472a-8d9e-5356a4799f59","Type":"ContainerStarted","Data":"e7f5e64337f569e747c2983cdc6ad37777f3fed1c100b9e8b3abcfaebb852e72"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.098146 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" event={"ID":"646ec78e-4771-472a-8d9e-5356a4799f59","Type":"ContainerStarted","Data":"a083396142cf40fc92adce4c9b65120767690295ed6ac46b92a324cbc3086771"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.099709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" event={"ID":"3260efb9-ca7f-4ad6-af84-4302737e8250","Type":"ContainerStarted","Data":"8a392aea1d8e5f84f0629c52e90112034e671ddf5c90bc17d6e5eae97e0dc4f5"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.111743 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" event={"ID":"202fcc8c-4e14-4336-aaef-22f33ff09ece","Type":"ContainerStarted","Data":"f2f0373543ca7cabe0012cb73b9e89e2ae85e1ff70a900656a93a9716b233cae"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.115917 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-sk46q" event={"ID":"8fc9ae95-b72d-42a3-943d-30c652843b61","Type":"ContainerStarted","Data":"ea0e33e272a7a8394439a575f49484d27ae552a625c0c52c8061038b507b8829"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.116801 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" event={"ID":"22d58380-a045-498b-aa0e-d07a603210ff","Type":"ContainerStarted","Data":"c9e4b54c2f873bb79b1ae8bd3024248610ab45c427de5e01d40e1750f68f458b"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.117471 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.117501 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.118624 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" event={"ID":"762d12a4-6d88-4715-923e-916dfc4ecad3","Type":"ContainerStarted","Data":"9916cade98533177c7c201fd250ae0e30e134818d9c29b9ecef93a8d3bd60ff5"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.118936 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.129799 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.130636 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.630623436 +0000 UTC m=+144.258466486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.130722 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bgmxp" podStartSLOduration=7.130712348 podStartE2EDuration="7.130712348s" podCreationTimestamp="2026-02-19 21:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.097634824 +0000 UTC m=+143.725477874" watchObservedRunningTime="2026-02-19 21:01:53.130712348 +0000 UTC m=+143.758555398" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.132117 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" podStartSLOduration=120.132110611 
podStartE2EDuration="2m0.132110611s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.129812556 +0000 UTC m=+143.757655596" watchObservedRunningTime="2026-02-19 21:01:53.132110611 +0000 UTC m=+143.759953661" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.139647 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" event={"ID":"67f954f3-df5a-4b74-b54e-1b29fee2a572","Type":"ContainerStarted","Data":"73c43e5fe3006c44925ab2ff5a6bc603e8988bc66858de65d7bef400245ca92d"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.151068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" event={"ID":"8b5c741c-61db-4b35-997b-8edd406b5b01","Type":"ContainerStarted","Data":"da907c0eadde67b6513c8a7680e3b2a2483adb025cd1b2740ff66a0500bb14b5"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.152045 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2brfc" event={"ID":"21e66e32-ffac-421f-ab01-252cb8d4e589","Type":"ContainerStarted","Data":"67faf610eec3326ece42fb644745b10cf88ab90e4450b65a5f08e86ef29a7c9f"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.152056 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nqhlp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.152084 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.159865 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvvdv" event={"ID":"2425979c-a518-4bd1-9289-032b9f57e016","Type":"ContainerStarted","Data":"6a60e8de1a3c0a1b110f1c65da1551e5ae1e718edf1e5637b1373c3d4bd0a861"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.160096 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qvvdv" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.161687 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" event={"ID":"63178e9b-177f-4dd9-ac6c-e54c48264262","Type":"ContainerStarted","Data":"b7c7e0cf05b7965fab528cae11b0eccaf63ef272ffc7a4ca5ffe12b8e3fe949d"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.161714 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" event={"ID":"63178e9b-177f-4dd9-ac6c-e54c48264262","Type":"ContainerStarted","Data":"f10951db34acc354edc493e58ecd0f5789781f4330e069d5d63882526fc6cae5"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.163374 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" event={"ID":"a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36","Type":"ContainerStarted","Data":"2a0d12c187e276c34ab2e73dc9416bda0199663534e75cc7ca66cbd305cf2f0f"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.163401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" event={"ID":"a9ba9ff9-e1ed-4a23-83f8-7125da8d1c36","Type":"ContainerStarted","Data":"eda562540be2103a26785a7c154168cc04e9f64f01293a5e55ba6f81b7b6d045"} 
Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.165021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" event={"ID":"0fd9b417-e431-4afc-b6a7-5c269fa04171","Type":"ContainerStarted","Data":"22154de7fce099ae817eab468db21013de0cc98e06ec40d2357e1c7ccad7f6c5"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.165116 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.173697 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.173736 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.184227 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" event={"ID":"19ce12ce-88ca-4e70-bce7-c87d5d064955","Type":"ContainerStarted","Data":"8881a07c2934d7b03b3597e2b729f62d632031b5d67bf0b89cdefb8132340afe"} Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.184279 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" event={"ID":"19ce12ce-88ca-4e70-bce7-c87d5d064955","Type":"ContainerStarted","Data":"af086ed3f225a42df20d506fa485bbaf216f87c0e501e2e9cc1a65894c1b01d6"} Feb 19 21:01:53 crc 
kubenswrapper[4886]: I0219 21:01:53.195792 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.195834 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.196009 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.207564 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-6lvw2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.207605 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.210155 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podStartSLOduration=119.210144823 podStartE2EDuration="1m59.210144823s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.207811307 +0000 UTC m=+143.835654357" watchObservedRunningTime="2026-02-19 21:01:53.210144823 +0000 UTC m=+143.837987873" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.210432 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4jt8" podStartSLOduration=120.21042868 podStartE2EDuration="2m0.21042868s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.170577734 +0000 UTC m=+143.798420774" watchObservedRunningTime="2026-02-19 21:01:53.21042868 +0000 UTC m=+143.838271720" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.228164 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.233436 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.234476 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.242527 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 21:01:53.74251303 +0000 UTC m=+144.370356080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.306274 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qvvdv" podStartSLOduration=7.306249699 podStartE2EDuration="7.306249699s" podCreationTimestamp="2026-02-19 21:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.252567881 +0000 UTC m=+143.880410921" watchObservedRunningTime="2026-02-19 21:01:53.306249699 +0000 UTC m=+143.934092739" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.340914 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.341227 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.841212378 +0000 UTC m=+144.469055418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.431899 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dd7wx" podStartSLOduration=119.431881653 podStartE2EDuration="1m59.431881653s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.316612058 +0000 UTC m=+143.944455108" watchObservedRunningTime="2026-02-19 21:01:53.431881653 +0000 UTC m=+144.059724693" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.442855 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.443172 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:53.943160954 +0000 UTC m=+144.571004004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.496707 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2d7s4" podStartSLOduration=119.496694718 podStartE2EDuration="1m59.496694718s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.494121356 +0000 UTC m=+144.121964406" watchObservedRunningTime="2026-02-19 21:01:53.496694718 +0000 UTC m=+144.124537768" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.496822 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhfmd" podStartSLOduration=119.496818741 podStartE2EDuration="1m59.496818741s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.431828502 +0000 UTC m=+144.059671552" watchObservedRunningTime="2026-02-19 21:01:53.496818741 +0000 UTC m=+144.124661791" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.547611 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.547889 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.047867556 +0000 UTC m=+144.675710606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.596512 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.633898 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5zkr9" podStartSLOduration=119.633879499 podStartE2EDuration="1m59.633879499s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.571713608 +0000 UTC m=+144.199556658" watchObservedRunningTime="2026-02-19 21:01:53.633879499 +0000 UTC m=+144.261722549" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.653035 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.653339 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.153328356 +0000 UTC m=+144.781171406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.671248 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8kml" podStartSLOduration=119.671231465 podStartE2EDuration="1m59.671231465s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.669211367 +0000 UTC m=+144.297054417" watchObservedRunningTime="2026-02-19 21:01:53.671231465 +0000 UTC m=+144.299074515" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.711151 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podStartSLOduration=119.711137643 podStartE2EDuration="1m59.711137643s" podCreationTimestamp="2026-02-19 20:59:54 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.709550065 +0000 UTC m=+144.337393115" watchObservedRunningTime="2026-02-19 21:01:53.711137643 +0000 UTC m=+144.338980683" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.753964 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.754315 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.254300408 +0000 UTC m=+144.882143458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.785964 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" podStartSLOduration=120.785946387 podStartE2EDuration="2m0.785946387s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.758597471 +0000 UTC m=+144.386440531" watchObservedRunningTime="2026-02-19 21:01:53.785946387 +0000 UTC m=+144.413789437" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.817187 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podStartSLOduration=119.817172436 podStartE2EDuration="1m59.817172436s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.786992852 +0000 UTC m=+144.414835902" watchObservedRunningTime="2026-02-19 21:01:53.817172436 +0000 UTC m=+144.445015486" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.817873 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bz9b2" podStartSLOduration=119.817868453 podStartE2EDuration="1m59.817868453s" podCreationTimestamp="2026-02-19 20:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.81524046 +0000 UTC m=+144.443083510" watchObservedRunningTime="2026-02-19 21:01:53.817868453 +0000 UTC m=+144.445711503" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.855064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.855472 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.355456435 +0000 UTC m=+144.983299485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.859617 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" podStartSLOduration=113.859602394 podStartE2EDuration="1m53.859602394s" podCreationTimestamp="2026-02-19 21:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:53.857484354 +0000 UTC m=+144.485327404" watchObservedRunningTime="2026-02-19 21:01:53.859602394 +0000 UTC m=+144.487445444" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.954917 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:53 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:53 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:53 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.954970 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.955832 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.955956 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.455941826 +0000 UTC m=+145.083784876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.956005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:53 crc kubenswrapper[4886]: E0219 21:01:53.956300 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.456292784 +0000 UTC m=+145.084135834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:53 crc kubenswrapper[4886]: I0219 21:01:53.995304 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6ntg" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.057283 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.057426 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.55739967 +0000 UTC m=+145.185242720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.057481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.057962 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.557945373 +0000 UTC m=+145.185788423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.159501 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.159674 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.659649473 +0000 UTC m=+145.287492523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.159817 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.160158 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.660143005 +0000 UTC m=+145.287986055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.190012 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" event={"ID":"8fc9ae95-b72d-42a3-943d-30c652843b61","Type":"ContainerStarted","Data":"ae43d581d3cf7b5df51802fbcbe81bd8a77d8a7adf69bd2007ace83d8acbc8b7"} Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.191357 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" event={"ID":"f21bc18b-845c-491a-8d27-4cdb035e26bc","Type":"ContainerStarted","Data":"31443324d9b535a09c995fc3ed6dcb5c17179f9a9fd1c490fb07fab41df1cdf7"} Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.191951 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nqhlp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.192008 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.192063 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-6lvw2 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.192097 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.208970 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.260364 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.261650 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.761636049 +0000 UTC m=+145.389479099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.311667 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.363003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.363632 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.863620266 +0000 UTC m=+145.491463316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.463893 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.464090 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.964067646 +0000 UTC m=+145.591910696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.464152 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.464639 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:54.964632739 +0000 UTC m=+145.592475779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.560303 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.560378 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.565377 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.565737 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.065721535 +0000 UTC m=+145.693564585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.565814 4886 patch_prober.go:28] interesting pod/apiserver-76f77b778f-sk46q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.565847 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" podUID="8fc9ae95-b72d-42a3-943d-30c652843b61" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.667369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.667688 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.16767259 +0000 UTC m=+145.795515640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.768317 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.768644 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.268630762 +0000 UTC m=+145.896473812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.869856 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.870158 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.370147417 +0000 UTC m=+145.997990467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.956513 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:54 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:54 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:54 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.956741 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:54 crc kubenswrapper[4886]: I0219 21:01:54.970674 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:54 crc kubenswrapper[4886]: E0219 21:01:54.971024 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 21:01:55.471011307 +0000 UTC m=+146.098854357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.071920 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.072168 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.572156983 +0000 UTC m=+146.200000033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.173273 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.173576 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.673561306 +0000 UTC m=+146.301404356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.192568 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.192617 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.196981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" event={"ID":"f21bc18b-845c-491a-8d27-4cdb035e26bc","Type":"ContainerStarted","Data":"7194d1ca4c6f3b208df98fc01fffba3cf694b0dad440f84f4af241089ff3ee91"} Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.274849 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: 
\"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.277340 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.777328345 +0000 UTC m=+146.405171395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.375844 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.376035 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.876017913 +0000 UTC m=+146.503860963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.376163 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.376433 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.876426153 +0000 UTC m=+146.504269203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.392381 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c89wj"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.393405 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.396128 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.406430 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c89wj"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.477686 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.477931 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.977907007 +0000 UTC m=+146.605750057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.478079 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.478117 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-catalog-content\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.478165 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-utilities\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.478187 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfs8w\" (UniqueName: 
\"kubernetes.io/projected/ca71404c-eec8-471e-a7ab-89f4ee69b025-kube-api-access-gfs8w\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.478471 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:55.978458991 +0000 UTC m=+146.606302041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.579213 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.579434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-utilities\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.579461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfs8w\" 
(UniqueName: \"kubernetes.io/projected/ca71404c-eec8-471e-a7ab-89f4ee69b025-kube-api-access-gfs8w\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.579533 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-catalog-content\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.580055 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.080029527 +0000 UTC m=+146.707872577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.580214 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-catalog-content\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.580419 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-utilities\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.614999 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xpm6"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.615817 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.617446 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.622161 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfs8w\" (UniqueName: \"kubernetes.io/projected/ca71404c-eec8-471e-a7ab-89f4ee69b025-kube-api-access-gfs8w\") pod \"certified-operators-c89wj\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.645605 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xpm6"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.680407 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-utilities\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.680470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.680519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26thw\" (UniqueName: \"kubernetes.io/projected/123f03da-c3b1-49dd-aec6-8fd547885851-kube-api-access-26thw\") pod 
\"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.680578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-catalog-content\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.680810 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.180794055 +0000 UTC m=+146.808637105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.704828 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781042 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.781160 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.281142692 +0000 UTC m=+146.908985742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781289 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-catalog-content\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781333 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-utilities\") pod \"community-operators-4xpm6\" 
(UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781389 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26thw\" (UniqueName: \"kubernetes.io/projected/123f03da-c3b1-49dd-aec6-8fd547885851-kube-api-access-26thw\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-catalog-content\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.781817 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.281788758 +0000 UTC m=+146.909631808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.781888 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-utilities\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.783132 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h6xg"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.784026 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.809016 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h6xg"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.817099 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26thw\" (UniqueName: \"kubernetes.io/projected/123f03da-c3b1-49dd-aec6-8fd547885851-kube-api-access-26thw\") pod \"community-operators-4xpm6\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.834122 4886 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.883217 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.883452 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-catalog-content\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.883502 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-utilities\") pod 
\"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.883570 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8kf\" (UniqueName: \"kubernetes.io/projected/7edb58ac-44cc-4e68-a1f0-3975d303b485-kube-api-access-9d8kf\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.883712 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.383694202 +0000 UTC m=+147.011537252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.957232 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:55 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:55 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:55 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.957307 4886 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.962924 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.985073 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-utilities\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.985154 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8kf\" (UniqueName: \"kubernetes.io/projected/7edb58ac-44cc-4e68-a1f0-3975d303b485-kube-api-access-9d8kf\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.985174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.985219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-catalog-content\") pod \"certified-operators-9h6xg\" (UID: 
\"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.985640 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-catalog-content\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.985746 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-utilities\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:55 crc kubenswrapper[4886]: E0219 21:01:55.986024 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.486010627 +0000 UTC m=+147.113853677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.988236 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rzcsb"] Feb 19 21:01:55 crc kubenswrapper[4886]: I0219 21:01:55.989332 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.003677 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8kf\" (UniqueName: \"kubernetes.io/projected/7edb58ac-44cc-4e68-a1f0-3975d303b485-kube-api-access-9d8kf\") pod \"certified-operators-9h6xg\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.010024 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzcsb"] Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.084884 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c89wj"] Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.086200 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.086429 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-utilities\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.086467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cslvp\" (UniqueName: \"kubernetes.io/projected/1b53c853-896f-4711-a16d-5b7981cec96d-kube-api-access-cslvp\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " 
pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.086542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-catalog-content\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.086652 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.586637651 +0000 UTC m=+147.214480691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.095787 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:01:56 crc kubenswrapper[4886]: W0219 21:01:56.096354 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca71404c_eec8_471e_a7ab_89f4ee69b025.slice/crio-a7194a8b0fa540e78ac178fa1d410599d3a7c8067a0ac0029458526ffd6e3d67 WatchSource:0}: Error finding container a7194a8b0fa540e78ac178fa1d410599d3a7c8067a0ac0029458526ffd6e3d67: Status 404 returned error can't find the container with id a7194a8b0fa540e78ac178fa1d410599d3a7c8067a0ac0029458526ffd6e3d67 Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.190858 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-utilities\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.191225 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cslvp\" (UniqueName: \"kubernetes.io/projected/1b53c853-896f-4711-a16d-5b7981cec96d-kube-api-access-cslvp\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.191278 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.191328 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-catalog-content\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.191733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-catalog-content\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.192167 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.692155223 +0000 UTC m=+147.319998273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.192539 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-utilities\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.200866 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xpm6"] Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.213714 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cslvp\" (UniqueName: \"kubernetes.io/projected/1b53c853-896f-4711-a16d-5b7981cec96d-kube-api-access-cslvp\") pod \"community-operators-rzcsb\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.218819 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" event={"ID":"f21bc18b-845c-491a-8d27-4cdb035e26bc","Type":"ContainerStarted","Data":"844f15be4cba73ef987746eedf2bdc2c7ebf12b9ebabb84ee5a54edc1d36635c"} Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.218871 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" 
event={"ID":"f21bc18b-845c-491a-8d27-4cdb035e26bc","Type":"ContainerStarted","Data":"808c748e87d9aff0ebe8a0de004ee99874c7821e2fa17bb4ee3a9e3483f5d822"} Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.223563 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c89wj" event={"ID":"ca71404c-eec8-471e-a7ab-89f4ee69b025","Type":"ContainerStarted","Data":"a7194a8b0fa540e78ac178fa1d410599d3a7c8067a0ac0029458526ffd6e3d67"} Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.292695 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.294062 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.794043147 +0000 UTC m=+147.421886197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.331087 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.394067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.394409 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.894393674 +0000 UTC m=+147.522236724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.421435 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" podStartSLOduration=10.421396942 podStartE2EDuration="10.421396942s" podCreationTimestamp="2026-02-19 21:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:56.24083745 +0000 UTC m=+146.868680500" watchObservedRunningTime="2026-02-19 21:01:56.421396942 +0000 UTC m=+147.049240002" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.427397 4886 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h6xg"] Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.495466 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.495839 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:56.995824528 +0000 UTC m=+147.623667578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.597491 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.598035 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:57.098018309 +0000 UTC m=+147.725861379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.699089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.699232 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 21:01:57.199206887 +0000 UTC m=+147.827049937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.699393 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: E0219 21:01:56.699713 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 21:01:57.199703619 +0000 UTC m=+147.827546739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tp6s4" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.736244 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzcsb"] Feb 19 21:01:56 crc kubenswrapper[4886]: W0219 21:01:56.743011 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b53c853_896f_4711_a16d_5b7981cec96d.slice/crio-09e07d563fa5b9ccf62777bd108ced5e2aaffafe42ad8d7e2c88b9d96dac610b WatchSource:0}: Error finding container 09e07d563fa5b9ccf62777bd108ced5e2aaffafe42ad8d7e2c88b9d96dac610b: Status 404 returned error can't find the container with id 09e07d563fa5b9ccf62777bd108ced5e2aaffafe42ad8d7e2c88b9d96dac610b Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.777015 4886 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T21:01:55.834288597Z","Handler":null,"Name":""} Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.783670 4886 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.783728 4886 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 21:01:56 crc kubenswrapper[4886]: 
I0219 21:01:56.800926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.808196 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.903082 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.909853 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.909902 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.944185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tp6s4\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.957577 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:56 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:56 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:56 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:56 crc kubenswrapper[4886]: I0219 21:01:56.957643 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.123580 4886 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.261029 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzcsb" event={"ID":"1b53c853-896f-4711-a16d-5b7981cec96d","Type":"ContainerDied","Data":"2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.261233 4886 generic.go:334] "Generic (PLEG): container finished" podID="1b53c853-896f-4711-a16d-5b7981cec96d" containerID="2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46" exitCode=0 Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.261318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzcsb" event={"ID":"1b53c853-896f-4711-a16d-5b7981cec96d","Type":"ContainerStarted","Data":"09e07d563fa5b9ccf62777bd108ced5e2aaffafe42ad8d7e2c88b9d96dac610b"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.264991 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.265840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" event={"ID":"202fcc8c-4e14-4336-aaef-22f33ff09ece","Type":"ContainerDied","Data":"f2f0373543ca7cabe0012cb73b9e89e2ae85e1ff70a900656a93a9716b233cae"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.265770 4886 generic.go:334] "Generic (PLEG): container finished" podID="202fcc8c-4e14-4336-aaef-22f33ff09ece" containerID="f2f0373543ca7cabe0012cb73b9e89e2ae85e1ff70a900656a93a9716b233cae" exitCode=0 Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.268510 4886 generic.go:334] "Generic (PLEG): container finished" podID="123f03da-c3b1-49dd-aec6-8fd547885851" 
containerID="f5b030c5efaf602ff7db5298cc8a544cdbd095ac6315628fbe17f72df0a713d8" exitCode=0 Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.268612 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xpm6" event={"ID":"123f03da-c3b1-49dd-aec6-8fd547885851","Type":"ContainerDied","Data":"f5b030c5efaf602ff7db5298cc8a544cdbd095ac6315628fbe17f72df0a713d8"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.268651 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xpm6" event={"ID":"123f03da-c3b1-49dd-aec6-8fd547885851","Type":"ContainerStarted","Data":"99aeeaaf526436a99f5c88d6b08e79a4ac45f4ad4ccd8d8f078fe827f7cee522"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.272549 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c89wj" event={"ID":"ca71404c-eec8-471e-a7ab-89f4ee69b025","Type":"ContainerDied","Data":"9ea873c1a04e2611412158e05f7c0b0e49df068ecc21256826fbb51d11c24e8c"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.272510 4886 generic.go:334] "Generic (PLEG): container finished" podID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerID="9ea873c1a04e2611412158e05f7c0b0e49df068ecc21256826fbb51d11c24e8c" exitCode=0 Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.278003 4886 generic.go:334] "Generic (PLEG): container finished" podID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerID="01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7" exitCode=0 Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.278299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6xg" event={"ID":"7edb58ac-44cc-4e68-a1f0-3975d303b485","Type":"ContainerDied","Data":"01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.278346 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9h6xg" event={"ID":"7edb58ac-44cc-4e68-a1f0-3975d303b485","Type":"ContainerStarted","Data":"81ba2bec114bffaead02339a24e796ede4372b3fd072cd7e1d4db826d02b1048"} Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.463654 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tp6s4"] Feb 19 21:01:57 crc kubenswrapper[4886]: W0219 21:01:57.478848 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7013b72c_2c60_4174_b7e9_a62de8263d50.slice/crio-0c2b224cc92010b4fd5535886a68f3302d55d5c6eb4b0181fc9dc60941e27e89 WatchSource:0}: Error finding container 0c2b224cc92010b4fd5535886a68f3302d55d5c6eb4b0181fc9dc60941e27e89: Status 404 returned error can't find the container with id 0c2b224cc92010b4fd5535886a68f3302d55d5c6eb4b0181fc9dc60941e27e89 Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.506590 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.507185 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.509353 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.510802 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.539983 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.581030 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qh8"] Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.582457 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.585144 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.593666 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qh8"] Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.613753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-catalog-content\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.613851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5kz\" (UniqueName: 
\"kubernetes.io/projected/ffc2e69b-08e3-4d9d-b21d-755914032707-kube-api-access-lm5kz\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.613880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.613908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-utilities\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.613929 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.715132 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5kz\" (UniqueName: \"kubernetes.io/projected/ffc2e69b-08e3-4d9d-b21d-755914032707-kube-api-access-lm5kz\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.715195 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.715235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-utilities\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.715275 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.715365 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.716191 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-catalog-content\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.716512 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-utilities\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.716556 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-catalog-content\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.734313 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm5kz\" (UniqueName: \"kubernetes.io/projected/ffc2e69b-08e3-4d9d-b21d-755914032707-kube-api-access-lm5kz\") pod \"redhat-marketplace-k7qh8\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.736196 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.852270 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.899411 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.957924 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:57 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:57 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:57 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.957990 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.982206 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbkd"] Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.983174 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:57 crc kubenswrapper[4886]: I0219 21:01:57.993983 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbkd"] Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.019815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-catalog-content\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.019928 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p885k\" (UniqueName: \"kubernetes.io/projected/86bcf599-6058-4716-a144-d2dcb927c498-kube-api-access-p885k\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.019989 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-utilities\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.123018 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p885k\" (UniqueName: \"kubernetes.io/projected/86bcf599-6058-4716-a144-d2dcb927c498-kube-api-access-p885k\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.123403 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-utilities\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.123447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-catalog-content\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.123849 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-catalog-content\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.124159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-utilities\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.141153 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qh8"] Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.173419 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p885k\" (UniqueName: \"kubernetes.io/projected/86bcf599-6058-4716-a144-d2dcb927c498-kube-api-access-p885k\") pod \"redhat-marketplace-tdbkd\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " 
pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.272665 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.295582 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qh8" event={"ID":"ffc2e69b-08e3-4d9d-b21d-755914032707","Type":"ContainerStarted","Data":"9a53e036e29509dd05df65dff7d2b09e3cd5b26a5591e5a8040e50cef3d87c53"} Feb 19 21:01:58 crc kubenswrapper[4886]: W0219 21:01:58.296317 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf7200c4a_1ba1_40e5_bb4e_6f4ca3829d92.slice/crio-3d90526750124e9c925aa106c1e19516e2845e151f4041ef79ade284079615fa WatchSource:0}: Error finding container 3d90526750124e9c925aa106c1e19516e2845e151f4041ef79ade284079615fa: Status 404 returned error can't find the container with id 3d90526750124e9c925aa106c1e19516e2845e151f4041ef79ade284079615fa Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.303977 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.305252 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" event={"ID":"7013b72c-2c60-4174-b7e9-a62de8263d50","Type":"ContainerStarted","Data":"d1ab94ca338e010d51fde28548ff9642099171bfdd1a498addb59ab6c3e79bb3"} Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.305313 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" event={"ID":"7013b72c-2c60-4174-b7e9-a62de8263d50","Type":"ContainerStarted","Data":"0c2b224cc92010b4fd5535886a68f3302d55d5c6eb4b0181fc9dc60941e27e89"} Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.330113 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" podStartSLOduration=125.330098113 podStartE2EDuration="2m5.330098113s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:58.329237452 +0000 UTC m=+148.957080502" watchObservedRunningTime="2026-02-19 21:01:58.330098113 +0000 UTC m=+148.957941163" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.583728 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dgcgg"] Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.587694 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.590798 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgcgg"] Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.601013 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.625388 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.633215 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psv2l\" (UniqueName: \"kubernetes.io/projected/eafab696-8d58-4612-97af-abb4fea7dd97-kube-api-access-psv2l\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.633252 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-utilities\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.633308 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-catalog-content\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.687055 4886 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.708405 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbkd"] Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.736589 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202fcc8c-4e14-4336-aaef-22f33ff09ece-config-volume\") pod \"202fcc8c-4e14-4336-aaef-22f33ff09ece\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.736635 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202fcc8c-4e14-4336-aaef-22f33ff09ece-secret-volume\") pod \"202fcc8c-4e14-4336-aaef-22f33ff09ece\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.736730 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgz4k\" (UniqueName: \"kubernetes.io/projected/202fcc8c-4e14-4336-aaef-22f33ff09ece-kube-api-access-fgz4k\") pod \"202fcc8c-4e14-4336-aaef-22f33ff09ece\" (UID: \"202fcc8c-4e14-4336-aaef-22f33ff09ece\") " Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.736939 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psv2l\" (UniqueName: \"kubernetes.io/projected/eafab696-8d58-4612-97af-abb4fea7dd97-kube-api-access-psv2l\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.736977 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-utilities\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.737019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-catalog-content\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.753079 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-utilities\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.753962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-catalog-content\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.754605 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202fcc8c-4e14-4336-aaef-22f33ff09ece-config-volume" (OuterVolumeSpecName: "config-volume") pod "202fcc8c-4e14-4336-aaef-22f33ff09ece" (UID: "202fcc8c-4e14-4336-aaef-22f33ff09ece"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.755419 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psv2l\" (UniqueName: \"kubernetes.io/projected/eafab696-8d58-4612-97af-abb4fea7dd97-kube-api-access-psv2l\") pod \"redhat-operators-dgcgg\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.760135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202fcc8c-4e14-4336-aaef-22f33ff09ece-kube-api-access-fgz4k" (OuterVolumeSpecName: "kube-api-access-fgz4k") pod "202fcc8c-4e14-4336-aaef-22f33ff09ece" (UID: "202fcc8c-4e14-4336-aaef-22f33ff09ece"). InnerVolumeSpecName "kube-api-access-fgz4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.763228 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202fcc8c-4e14-4336-aaef-22f33ff09ece-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "202fcc8c-4e14-4336-aaef-22f33ff09ece" (UID: "202fcc8c-4e14-4336-aaef-22f33ff09ece"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.838390 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202fcc8c-4e14-4336-aaef-22f33ff09ece-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.838423 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202fcc8c-4e14-4336-aaef-22f33ff09ece-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.838434 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgz4k\" (UniqueName: \"kubernetes.io/projected/202fcc8c-4e14-4336-aaef-22f33ff09ece-kube-api-access-fgz4k\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.956060 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:58 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:58 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:58 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.956175 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.969426 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.987690 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chw2d"] Feb 19 21:01:58 crc kubenswrapper[4886]: E0219 21:01:58.988046 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202fcc8c-4e14-4336-aaef-22f33ff09ece" containerName="collect-profiles" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.988065 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="202fcc8c-4e14-4336-aaef-22f33ff09ece" containerName="collect-profiles" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.988188 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="202fcc8c-4e14-4336-aaef-22f33ff09ece" containerName="collect-profiles" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.988974 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:58 crc kubenswrapper[4886]: I0219 21:01:58.990428 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chw2d"] Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.044019 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-utilities\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.044082 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-catalog-content\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " 
pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.044122 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxk4\" (UniqueName: \"kubernetes.io/projected/13570438-2f8f-4ad0-8320-a6fa737bf816-kube-api-access-xkxk4\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.146837 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxk4\" (UniqueName: \"kubernetes.io/projected/13570438-2f8f-4ad0-8320-a6fa737bf816-kube-api-access-xkxk4\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.146993 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-utilities\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.147019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-catalog-content\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.147447 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-utilities\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " 
pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.147493 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-catalog-content\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.168297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxk4\" (UniqueName: \"kubernetes.io/projected/13570438-2f8f-4ad0-8320-a6fa737bf816-kube-api-access-xkxk4\") pod \"redhat-operators-chw2d\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.283168 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.283209 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.296460 4886 patch_prober.go:28] interesting pod/console-f9d7485db-rn5ms container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.296516 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rn5ms" podUID="d03f4c56-f429-4911-814d-02610d24f7ec" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.318696 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" event={"ID":"202fcc8c-4e14-4336-aaef-22f33ff09ece","Type":"ContainerDied","Data":"2bb2cae5a335cf79ddf50032b6eedc51a3d6b83f20b545c8dca3fea682f546b8"} Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.318732 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb2cae5a335cf79ddf50032b6eedc51a3d6b83f20b545c8dca3fea682f546b8" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.318776 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.326600 4886 generic.go:334] "Generic (PLEG): container finished" podID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerID="7b92d950939d798b595a45955a1f5b1011abf038889d11b3a34d5af0319424aa" exitCode=0 Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.327154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qh8" event={"ID":"ffc2e69b-08e3-4d9d-b21d-755914032707","Type":"ContainerDied","Data":"7b92d950939d798b595a45955a1f5b1011abf038889d11b3a34d5af0319424aa"} Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.327445 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.347279 4886 generic.go:334] "Generic (PLEG): container finished" podID="f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92" containerID="3f4dac6ba4fa771fa579b9e85d73e4bcc63fe9e14e6e233ca024e87a288cecad" exitCode=0 Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.347443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92","Type":"ContainerDied","Data":"3f4dac6ba4fa771fa579b9e85d73e4bcc63fe9e14e6e233ca024e87a288cecad"} Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.347470 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92","Type":"ContainerStarted","Data":"3d90526750124e9c925aa106c1e19516e2845e151f4041ef79ade284079615fa"} Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.355150 4886 generic.go:334] "Generic (PLEG): container finished" podID="86bcf599-6058-4716-a144-d2dcb927c498" containerID="60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef" exitCode=0 Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.359401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbkd" event={"ID":"86bcf599-6058-4716-a144-d2dcb927c498","Type":"ContainerDied","Data":"60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef"} Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.359466 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbkd" event={"ID":"86bcf599-6058-4716-a144-d2dcb927c498","Type":"ContainerStarted","Data":"6233a8b9f18a619b02d324f49fe6f596b65cf55fbec78c5f4112f5688edd2750"} Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.359799 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.449962 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgcgg"] Feb 19 21:01:59 crc kubenswrapper[4886]: W0219 21:01:59.458575 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeafab696_8d58_4612_97af_abb4fea7dd97.slice/crio-9090515f9202b3732383afad462069ec2ed94760cfbdc6cc3071f93801781891 WatchSource:0}: Error finding container 9090515f9202b3732383afad462069ec2ed94760cfbdc6cc3071f93801781891: Status 404 returned error can't find the container with id 9090515f9202b3732383afad462069ec2ed94760cfbdc6cc3071f93801781891 Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.566948 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.576464 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.596101 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-6lvw2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.596150 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.596252 4886 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-6lvw2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.596290 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.736990 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chw2d"] Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.745538 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.952801 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.955801 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 21:01:59 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Feb 19 21:01:59 crc kubenswrapper[4886]: [+]process-running ok Feb 19 21:01:59 crc kubenswrapper[4886]: healthz check failed Feb 19 21:01:59 crc kubenswrapper[4886]: I0219 21:01:59.955877 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.012613 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.363903 4886 generic.go:334] "Generic (PLEG): container finished" podID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerID="0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb" exitCode=0 Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.364371 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chw2d" event={"ID":"13570438-2f8f-4ad0-8320-a6fa737bf816","Type":"ContainerDied","Data":"0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb"} Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.364398 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chw2d" event={"ID":"13570438-2f8f-4ad0-8320-a6fa737bf816","Type":"ContainerStarted","Data":"b3f04439f7532627f547146e982ecaab40ed6433bacb88bc212d809585c1e394"} Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.368231 4886 generic.go:334] "Generic (PLEG): container finished" podID="eafab696-8d58-4612-97af-abb4fea7dd97" containerID="f85bf342f1a896a91a3c11ef6b2432726ba7b8ca8a5bbf7f65d3b9e0630db02f" exitCode=0 Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.368315 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerDied","Data":"f85bf342f1a896a91a3c11ef6b2432726ba7b8ca8a5bbf7f65d3b9e0630db02f"} Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.368338 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerStarted","Data":"9090515f9202b3732383afad462069ec2ed94760cfbdc6cc3071f93801781891"} Feb 19 
21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.567926 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.568037 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.569752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.594510 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.625501 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.669064 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kubelet-dir\") pod \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.669139 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kube-api-access\") pod \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\" (UID: \"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92\") " Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.669778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.669819 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.676170 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.676209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92" (UID: "f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.685787 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.687244 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92" (UID: "f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.742246 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.770738 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.770765 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.782316 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.798997 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.956045 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:02:00 crc kubenswrapper[4886]: I0219 21:02:00.960571 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.015399 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 21:02:01 crc kubenswrapper[4886]: E0219 21:02:01.015646 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92" containerName="pruner" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.015658 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92" containerName="pruner" Feb 19 21:02:01 
crc kubenswrapper[4886]: I0219 21:02:01.015759 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92" containerName="pruner" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.016172 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.016245 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.019024 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.020243 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.075112 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/313779c6-0e3d-494c-a7d1-53f1715657af-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.075162 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/313779c6-0e3d-494c-a7d1-53f1715657af-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.175730 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/313779c6-0e3d-494c-a7d1-53f1715657af-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.175788 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/313779c6-0e3d-494c-a7d1-53f1715657af-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.175879 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/313779c6-0e3d-494c-a7d1-53f1715657af-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.203249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/313779c6-0e3d-494c-a7d1-53f1715657af-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: W0219 21:02:01.325728 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5c2af20c4451e823468ffa686b27391b59000e6c5c03dc35514d1e9115bc4e66 WatchSource:0}: Error finding container 5c2af20c4451e823468ffa686b27391b59000e6c5c03dc35514d1e9115bc4e66: Status 404 returned error can't find the container with id 5c2af20c4451e823468ffa686b27391b59000e6c5c03dc35514d1e9115bc4e66 Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.342770 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.407508 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f7200c4a-1ba1-40e5-bb4e-6f4ca3829d92","Type":"ContainerDied","Data":"3d90526750124e9c925aa106c1e19516e2845e151f4041ef79ade284079615fa"} Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.407556 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d90526750124e9c925aa106c1e19516e2845e151f4041ef79ade284079615fa" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.407558 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.410415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fc3b74dd1415e1953bc3a551d7fc0577deb100abb5d76fecdce1b25e6fb3e0c9"} Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.432464 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c2af20c4451e823468ffa686b27391b59000e6c5c03dc35514d1e9115bc4e66"} Feb 19 21:02:01 crc kubenswrapper[4886]: W0219 21:02:01.435992 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0adbde1b4ca3be2d96e7f2614139e4d7b4e790b871e727a5f46e4a4cf5280a8e WatchSource:0}: Error finding container 0adbde1b4ca3be2d96e7f2614139e4d7b4e790b871e727a5f46e4a4cf5280a8e: Status 404 returned error can't find the container with id 
0adbde1b4ca3be2d96e7f2614139e4d7b4e790b871e727a5f46e4a4cf5280a8e Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.736162 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 21:02:01 crc kubenswrapper[4886]: W0219 21:02:01.833330 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod313779c6_0e3d_494c_a7d1_53f1715657af.slice/crio-a383f186a94b216d11dcfbe62bdaa85b9c9256aa5cd21fe4d5dde9cab5f0f220 WatchSource:0}: Error finding container a383f186a94b216d11dcfbe62bdaa85b9c9256aa5cd21fe4d5dde9cab5f0f220: Status 404 returned error can't find the container with id a383f186a94b216d11dcfbe62bdaa85b9c9256aa5cd21fe4d5dde9cab5f0f220 Feb 19 21:02:01 crc kubenswrapper[4886]: I0219 21:02:01.859539 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qvvdv" Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 21:02:02.443475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2732f4f0c97dbff4a3f8d1d6bb19107f7c433298669a2475680940b33792fbe6"} Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 21:02:02.455503 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"66ef1f193146091bbc6806b278d9a706530dec0a0560abf093498371f4e85430"} Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 21:02:02.455550 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0adbde1b4ca3be2d96e7f2614139e4d7b4e790b871e727a5f46e4a4cf5280a8e"} Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 
21:02:02.473683 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"38915abefe2df69876199e70d56d096cf4dd4ef9a219ca9a251da6f70dcf8395"} Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 21:02:02.474208 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 21:02:02.480147 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"313779c6-0e3d-494c-a7d1-53f1715657af","Type":"ContainerStarted","Data":"a383f186a94b216d11dcfbe62bdaa85b9c9256aa5cd21fe4d5dde9cab5f0f220"} Feb 19 21:02:02 crc kubenswrapper[4886]: I0219 21:02:02.709619 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:02:03 crc kubenswrapper[4886]: I0219 21:02:03.515996 4886 generic.go:334] "Generic (PLEG): container finished" podID="313779c6-0e3d-494c-a7d1-53f1715657af" containerID="042c8d277b700d19d6042667b5226788f6583ba83ccb88bdf54faa5f1f8dc3dd" exitCode=0 Feb 19 21:02:03 crc kubenswrapper[4886]: I0219 21:02:03.516050 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"313779c6-0e3d-494c-a7d1-53f1715657af","Type":"ContainerDied","Data":"042c8d277b700d19d6042667b5226788f6583ba83ccb88bdf54faa5f1f8dc3dd"} Feb 19 21:02:09 crc kubenswrapper[4886]: I0219 21:02:09.287583 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:02:09 crc kubenswrapper[4886]: I0219 21:02:09.291143 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:02:09 crc kubenswrapper[4886]: I0219 21:02:09.611035 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6lvw2" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.217299 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.407624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/313779c6-0e3d-494c-a7d1-53f1715657af-kubelet-dir\") pod \"313779c6-0e3d-494c-a7d1-53f1715657af\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.407733 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/313779c6-0e3d-494c-a7d1-53f1715657af-kube-api-access\") pod \"313779c6-0e3d-494c-a7d1-53f1715657af\" (UID: \"313779c6-0e3d-494c-a7d1-53f1715657af\") " Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.407779 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/313779c6-0e3d-494c-a7d1-53f1715657af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "313779c6-0e3d-494c-a7d1-53f1715657af" (UID: "313779c6-0e3d-494c-a7d1-53f1715657af"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.407974 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/313779c6-0e3d-494c-a7d1-53f1715657af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.424545 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313779c6-0e3d-494c-a7d1-53f1715657af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "313779c6-0e3d-494c-a7d1-53f1715657af" (UID: "313779c6-0e3d-494c-a7d1-53f1715657af"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.509664 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/313779c6-0e3d-494c-a7d1-53f1715657af-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.611108 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.612056 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"313779c6-0e3d-494c-a7d1-53f1715657af","Type":"ContainerDied","Data":"a383f186a94b216d11dcfbe62bdaa85b9c9256aa5cd21fe4d5dde9cab5f0f220"} Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.612110 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a383f186a94b216d11dcfbe62bdaa85b9c9256aa5cd21fe4d5dde9cab5f0f220" Feb 19 21:02:16 crc 
kubenswrapper[4886]: I0219 21:02:16.612149 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.617937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1160fb8a-b59d-4b7b-8632-d2b2ead9bb36-metrics-certs\") pod \"network-metrics-daemon-6hp27\" (UID: \"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36\") " pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:02:16 crc kubenswrapper[4886]: I0219 21:02:16.723785 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6hp27" Feb 19 21:02:17 crc kubenswrapper[4886]: I0219 21:02:17.144124 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:02:18 crc kubenswrapper[4886]: I0219 21:02:18.324513 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:02:18 crc kubenswrapper[4886]: I0219 21:02:18.324603 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:02:29 crc kubenswrapper[4886]: I0219 21:02:29.703324 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.062731 4886 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.062962 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p885k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tdbkd_openshift-marketplace(86bcf599-6058-4716-a144-d2dcb927c498): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.064570 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tdbkd" podUID="86bcf599-6058-4716-a144-d2dcb927c498" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.166378 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.166516 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkxk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-chw2d_openshift-marketplace(13570438-2f8f-4ad0-8320-a6fa737bf816): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.167682 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-chw2d" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" Feb 19 21:02:30 crc 
kubenswrapper[4886]: E0219 21:02:30.179388 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.179500 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26thw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-4xpm6_openshift-marketplace(123f03da-c3b1-49dd-aec6-8fd547885851): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.180623 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4xpm6" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.507245 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6hp27"] Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.691472 4886 generic.go:334] "Generic (PLEG): container finished" podID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerID="8007d7f3b6d6ae53e2ce58ef91ab7b715a791c3307f0820549748705ab735649" exitCode=0 Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.692876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c89wj" event={"ID":"ca71404c-eec8-471e-a7ab-89f4ee69b025","Type":"ContainerDied","Data":"8007d7f3b6d6ae53e2ce58ef91ab7b715a791c3307f0820549748705ab735649"} Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.694733 4886 generic.go:334] "Generic (PLEG): container finished" podID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerID="bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a" exitCode=0 Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.694860 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6xg" event={"ID":"7edb58ac-44cc-4e68-a1f0-3975d303b485","Type":"ContainerDied","Data":"bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a"} Feb 19 21:02:30 crc 
kubenswrapper[4886]: I0219 21:02:30.698294 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerStarted","Data":"53c501125ee63bb72a63e31e6f410403463b1da758a0f10fa04bc1be9994cee6"} Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.699414 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6hp27" event={"ID":"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36","Type":"ContainerStarted","Data":"e9cd0d512ae619203ea6b036aaf73421212d5d00cd07acb536d9225a40a33904"} Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.704015 4886 generic.go:334] "Generic (PLEG): container finished" podID="1b53c853-896f-4711-a16d-5b7981cec96d" containerID="746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08" exitCode=0 Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.704144 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzcsb" event={"ID":"1b53c853-896f-4711-a16d-5b7981cec96d","Type":"ContainerDied","Data":"746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08"} Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.707062 4886 generic.go:334] "Generic (PLEG): container finished" podID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerID="84e114b6702a29d37057bb3f083695d32ca7c5c942fcbd3bd8897a9a2aa94371" exitCode=0 Feb 19 21:02:30 crc kubenswrapper[4886]: I0219 21:02:30.709963 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qh8" event={"ID":"ffc2e69b-08e3-4d9d-b21d-755914032707","Type":"ContainerDied","Data":"84e114b6702a29d37057bb3f083695d32ca7c5c942fcbd3bd8897a9a2aa94371"} Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.719876 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-chw2d" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.719893 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4xpm6" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" Feb 19 21:02:30 crc kubenswrapper[4886]: E0219 21:02:30.722765 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tdbkd" podUID="86bcf599-6058-4716-a144-d2dcb927c498" Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.724472 4886 generic.go:334] "Generic (PLEG): container finished" podID="eafab696-8d58-4612-97af-abb4fea7dd97" containerID="53c501125ee63bb72a63e31e6f410403463b1da758a0f10fa04bc1be9994cee6" exitCode=0 Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.724524 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerDied","Data":"53c501125ee63bb72a63e31e6f410403463b1da758a0f10fa04bc1be9994cee6"} Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.727715 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6hp27" event={"ID":"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36","Type":"ContainerStarted","Data":"c5c418fa09c916d77428e96a3023ecaebcf35ec610a4de7bda7f0145d74805f8"} Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.727747 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-6hp27" event={"ID":"1160fb8a-b59d-4b7b-8632-d2b2ead9bb36","Type":"ContainerStarted","Data":"707ee0d69802fc3a000b2b882e5721fcd84b68764bccac50222971999aeb7184"} Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.729867 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzcsb" event={"ID":"1b53c853-896f-4711-a16d-5b7981cec96d","Type":"ContainerStarted","Data":"05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0"} Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.732909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qh8" event={"ID":"ffc2e69b-08e3-4d9d-b21d-755914032707","Type":"ContainerStarted","Data":"c0ed80b21aa1e078ea6a9cc72e46dd9bda17c8ea3b8f629bff65bea64549c574"} Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.735727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c89wj" event={"ID":"ca71404c-eec8-471e-a7ab-89f4ee69b025","Type":"ContainerStarted","Data":"015a717a2f424398fe158e109985eb3c18e0fb2617b8ec7d48e023b069bff693"} Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.770344 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7qh8" podStartSLOduration=2.9604535050000003 podStartE2EDuration="34.770313675s" podCreationTimestamp="2026-02-19 21:01:57 +0000 UTC" firstStartedPulling="2026-02-19 21:01:59.331363203 +0000 UTC m=+149.959206253" lastFinishedPulling="2026-02-19 21:02:31.141223373 +0000 UTC m=+181.769066423" observedRunningTime="2026-02-19 21:02:31.76801972 +0000 UTC m=+182.395862770" watchObservedRunningTime="2026-02-19 21:02:31.770313675 +0000 UTC m=+182.398156755" Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.788756 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-rzcsb" podStartSLOduration=2.9616058819999997 podStartE2EDuration="36.788738117s" podCreationTimestamp="2026-02-19 21:01:55 +0000 UTC" firstStartedPulling="2026-02-19 21:01:57.264405476 +0000 UTC m=+147.892248526" lastFinishedPulling="2026-02-19 21:02:31.091537711 +0000 UTC m=+181.719380761" observedRunningTime="2026-02-19 21:02:31.784663419 +0000 UTC m=+182.412506469" watchObservedRunningTime="2026-02-19 21:02:31.788738117 +0000 UTC m=+182.416581177" Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.804674 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c89wj" podStartSLOduration=2.883708714 podStartE2EDuration="36.804652709s" podCreationTimestamp="2026-02-19 21:01:55 +0000 UTC" firstStartedPulling="2026-02-19 21:01:57.274426657 +0000 UTC m=+147.902269737" lastFinishedPulling="2026-02-19 21:02:31.195370682 +0000 UTC m=+181.823213732" observedRunningTime="2026-02-19 21:02:31.800395367 +0000 UTC m=+182.428238417" watchObservedRunningTime="2026-02-19 21:02:31.804652709 +0000 UTC m=+182.432495769" Feb 19 21:02:31 crc kubenswrapper[4886]: I0219 21:02:31.812330 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6hp27" podStartSLOduration=158.812310723 podStartE2EDuration="2m38.812310723s" podCreationTimestamp="2026-02-19 20:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:02:31.812257391 +0000 UTC m=+182.440100461" watchObservedRunningTime="2026-02-19 21:02:31.812310723 +0000 UTC m=+182.440153773" Feb 19 21:02:32 crc kubenswrapper[4886]: I0219 21:02:32.742599 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6xg" 
event={"ID":"7edb58ac-44cc-4e68-a1f0-3975d303b485","Type":"ContainerStarted","Data":"82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66"} Feb 19 21:02:32 crc kubenswrapper[4886]: I0219 21:02:32.745073 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerStarted","Data":"8afcdc4f6e4995f4b0875488b207d7499a0ea8e241f8e6c52a01349eb51f93c4"} Feb 19 21:02:32 crc kubenswrapper[4886]: I0219 21:02:32.763751 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h6xg" podStartSLOduration=3.461451644 podStartE2EDuration="37.763734278s" podCreationTimestamp="2026-02-19 21:01:55 +0000 UTC" firstStartedPulling="2026-02-19 21:01:57.282722796 +0000 UTC m=+147.910565886" lastFinishedPulling="2026-02-19 21:02:31.58500542 +0000 UTC m=+182.212848520" observedRunningTime="2026-02-19 21:02:32.762634772 +0000 UTC m=+183.390477822" watchObservedRunningTime="2026-02-19 21:02:32.763734278 +0000 UTC m=+183.391577328" Feb 19 21:02:32 crc kubenswrapper[4886]: I0219 21:02:32.778695 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dgcgg" podStartSLOduration=3.009710238 podStartE2EDuration="34.778683237s" podCreationTimestamp="2026-02-19 21:01:58 +0000 UTC" firstStartedPulling="2026-02-19 21:02:00.373465754 +0000 UTC m=+151.001308804" lastFinishedPulling="2026-02-19 21:02:32.142438753 +0000 UTC m=+182.770281803" observedRunningTime="2026-02-19 21:02:32.778456601 +0000 UTC m=+183.406299651" watchObservedRunningTime="2026-02-19 21:02:32.778683237 +0000 UTC m=+183.406526287" Feb 19 21:02:35 crc kubenswrapper[4886]: I0219 21:02:35.705770 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:02:35 crc kubenswrapper[4886]: I0219 21:02:35.706419 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:02:35 crc kubenswrapper[4886]: I0219 21:02:35.874109 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.096846 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.096885 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.153208 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.332660 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.332718 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.396243 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.809703 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:02:36 crc kubenswrapper[4886]: I0219 21:02:36.812174 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.402366 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] 
Feb 19 21:02:37 crc kubenswrapper[4886]: E0219 21:02:37.402775 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313779c6-0e3d-494c-a7d1-53f1715657af" containerName="pruner" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.402789 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="313779c6-0e3d-494c-a7d1-53f1715657af" containerName="pruner" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.402896 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="313779c6-0e3d-494c-a7d1-53f1715657af" containerName="pruner" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.403286 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.405667 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.405669 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.417004 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.431124 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5f4xd"] Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.500748 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.500793 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.601859 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.601895 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.601956 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.625105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.726419 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.900610 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.900969 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:02:37 crc kubenswrapper[4886]: I0219 21:02:37.940933 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.131729 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.781725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b","Type":"ContainerStarted","Data":"8fc97e42917fdbee604b9dbb8422aeb577ecd87e56c270c126d366e9d5b10026"} Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.781792 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b","Type":"ContainerStarted","Data":"92dff0cc1ad2c684cf2d5198a9ddd54c74c5e188714badb84cf4438269219cb9"} Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.797571 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.7975496309999999 podStartE2EDuration="1.797549631s" podCreationTimestamp="2026-02-19 21:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:02:38.795252775 +0000 UTC m=+189.423095865" watchObservedRunningTime="2026-02-19 
21:02:38.797549631 +0000 UTC m=+189.425392691" Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.836734 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.930772 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h6xg"] Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.930998 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9h6xg" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="registry-server" containerID="cri-o://82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66" gracePeriod=2 Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.969776 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:02:38 crc kubenswrapper[4886]: I0219 21:02:38.969823 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.015253 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.120143 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rzcsb"] Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.120659 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rzcsb" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="registry-server" containerID="cri-o://05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0" gracePeriod=2 Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.417578 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.530836 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-utilities\") pod \"7edb58ac-44cc-4e68-a1f0-3975d303b485\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.530932 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-catalog-content\") pod \"7edb58ac-44cc-4e68-a1f0-3975d303b485\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.530956 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d8kf\" (UniqueName: \"kubernetes.io/projected/7edb58ac-44cc-4e68-a1f0-3975d303b485-kube-api-access-9d8kf\") pod \"7edb58ac-44cc-4e68-a1f0-3975d303b485\" (UID: \"7edb58ac-44cc-4e68-a1f0-3975d303b485\") " Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.532368 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.535565 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-utilities" (OuterVolumeSpecName: "utilities") pod "7edb58ac-44cc-4e68-a1f0-3975d303b485" (UID: "7edb58ac-44cc-4e68-a1f0-3975d303b485"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.536594 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7edb58ac-44cc-4e68-a1f0-3975d303b485-kube-api-access-9d8kf" (OuterVolumeSpecName: "kube-api-access-9d8kf") pod "7edb58ac-44cc-4e68-a1f0-3975d303b485" (UID: "7edb58ac-44cc-4e68-a1f0-3975d303b485"). InnerVolumeSpecName "kube-api-access-9d8kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.606128 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7edb58ac-44cc-4e68-a1f0-3975d303b485" (UID: "7edb58ac-44cc-4e68-a1f0-3975d303b485"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.632520 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.632563 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb58ac-44cc-4e68-a1f0-3975d303b485-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.632577 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d8kf\" (UniqueName: \"kubernetes.io/projected/7edb58ac-44cc-4e68-a1f0-3975d303b485-kube-api-access-9d8kf\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.733574 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-catalog-content\") pod \"1b53c853-896f-4711-a16d-5b7981cec96d\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.733669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-utilities\") pod \"1b53c853-896f-4711-a16d-5b7981cec96d\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.733744 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cslvp\" (UniqueName: \"kubernetes.io/projected/1b53c853-896f-4711-a16d-5b7981cec96d-kube-api-access-cslvp\") pod \"1b53c853-896f-4711-a16d-5b7981cec96d\" (UID: \"1b53c853-896f-4711-a16d-5b7981cec96d\") " Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.734640 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-utilities" (OuterVolumeSpecName: "utilities") pod "1b53c853-896f-4711-a16d-5b7981cec96d" (UID: "1b53c853-896f-4711-a16d-5b7981cec96d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.738704 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b53c853-896f-4711-a16d-5b7981cec96d-kube-api-access-cslvp" (OuterVolumeSpecName: "kube-api-access-cslvp") pod "1b53c853-896f-4711-a16d-5b7981cec96d" (UID: "1b53c853-896f-4711-a16d-5b7981cec96d"). InnerVolumeSpecName "kube-api-access-cslvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.783706 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b53c853-896f-4711-a16d-5b7981cec96d" (UID: "1b53c853-896f-4711-a16d-5b7981cec96d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.789350 4886 generic.go:334] "Generic (PLEG): container finished" podID="d6f3dbc6-bbd4-4984-8c57-3e2795189e5b" containerID="8fc97e42917fdbee604b9dbb8422aeb577ecd87e56c270c126d366e9d5b10026" exitCode=0 Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.789455 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b","Type":"ContainerDied","Data":"8fc97e42917fdbee604b9dbb8422aeb577ecd87e56c270c126d366e9d5b10026"} Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.793560 4886 generic.go:334] "Generic (PLEG): container finished" podID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerID="82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66" exitCode=0 Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.793638 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h6xg" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.793652 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6xg" event={"ID":"7edb58ac-44cc-4e68-a1f0-3975d303b485","Type":"ContainerDied","Data":"82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66"} Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.793692 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6xg" event={"ID":"7edb58ac-44cc-4e68-a1f0-3975d303b485","Type":"ContainerDied","Data":"81ba2bec114bffaead02339a24e796ede4372b3fd072cd7e1d4db826d02b1048"} Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.793790 4886 scope.go:117] "RemoveContainer" containerID="82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.798111 4886 generic.go:334] "Generic (PLEG): container finished" podID="1b53c853-896f-4711-a16d-5b7981cec96d" containerID="05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0" exitCode=0 Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.798171 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rzcsb" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.798217 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzcsb" event={"ID":"1b53c853-896f-4711-a16d-5b7981cec96d","Type":"ContainerDied","Data":"05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0"} Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.798314 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzcsb" event={"ID":"1b53c853-896f-4711-a16d-5b7981cec96d","Type":"ContainerDied","Data":"09e07d563fa5b9ccf62777bd108ced5e2aaffafe42ad8d7e2c88b9d96dac610b"} Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.829664 4886 scope.go:117] "RemoveContainer" containerID="bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.831776 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h6xg"] Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.835784 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.835818 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b53c853-896f-4711-a16d-5b7981cec96d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.835831 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cslvp\" (UniqueName: \"kubernetes.io/projected/1b53c853-896f-4711-a16d-5b7981cec96d-kube-api-access-cslvp\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.837481 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-9h6xg"] Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.845413 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.848391 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rzcsb"] Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.853566 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rzcsb"] Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.856794 4886 scope.go:117] "RemoveContainer" containerID="01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.874039 4886 scope.go:117] "RemoveContainer" containerID="82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66" Feb 19 21:02:39 crc kubenswrapper[4886]: E0219 21:02:39.874473 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66\": container with ID starting with 82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66 not found: ID does not exist" containerID="82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.874510 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66"} err="failed to get container status \"82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66\": rpc error: code = NotFound desc = could not find container \"82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66\": container with ID starting with 82ab09adae9b21c2cf6bdd746a24cee957dd7be522074a5f494b888c8c85ba66 not found: ID does not 
exist" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.874554 4886 scope.go:117] "RemoveContainer" containerID="bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a" Feb 19 21:02:39 crc kubenswrapper[4886]: E0219 21:02:39.874767 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a\": container with ID starting with bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a not found: ID does not exist" containerID="bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.874788 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a"} err="failed to get container status \"bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a\": rpc error: code = NotFound desc = could not find container \"bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a\": container with ID starting with bd4306bccae22f53d40de51322ba8ddd833ed10287cb8970bf7d8d8e7f7e882a not found: ID does not exist" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.874802 4886 scope.go:117] "RemoveContainer" containerID="01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7" Feb 19 21:02:39 crc kubenswrapper[4886]: E0219 21:02:39.875075 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7\": container with ID starting with 01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7 not found: ID does not exist" containerID="01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.875095 4886 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7"} err="failed to get container status \"01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7\": rpc error: code = NotFound desc = could not find container \"01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7\": container with ID starting with 01fb484fdb05ba446696f30090f7ceef6856a6d7708e2fc0ba777eaa566044b7 not found: ID does not exist" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.875109 4886 scope.go:117] "RemoveContainer" containerID="05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.890951 4886 scope.go:117] "RemoveContainer" containerID="746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.905526 4886 scope.go:117] "RemoveContainer" containerID="2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.916765 4886 scope.go:117] "RemoveContainer" containerID="05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0" Feb 19 21:02:39 crc kubenswrapper[4886]: E0219 21:02:39.917141 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0\": container with ID starting with 05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0 not found: ID does not exist" containerID="05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.917171 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0"} err="failed to get container status 
\"05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0\": rpc error: code = NotFound desc = could not find container \"05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0\": container with ID starting with 05d589ee93ce48a90d2054c0f77d738c40eede47824716cbb0054b57a55d8ec0 not found: ID does not exist" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.917194 4886 scope.go:117] "RemoveContainer" containerID="746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08" Feb 19 21:02:39 crc kubenswrapper[4886]: E0219 21:02:39.917470 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08\": container with ID starting with 746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08 not found: ID does not exist" containerID="746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.917490 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08"} err="failed to get container status \"746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08\": rpc error: code = NotFound desc = could not find container \"746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08\": container with ID starting with 746fde0af66ca0dee24a30af6cfafedc33545f12e06b20bf61ae3133129e9a08 not found: ID does not exist" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.917503 4886 scope.go:117] "RemoveContainer" containerID="2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46" Feb 19 21:02:39 crc kubenswrapper[4886]: E0219 21:02:39.917834 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46\": container with ID starting with 2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46 not found: ID does not exist" containerID="2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46" Feb 19 21:02:39 crc kubenswrapper[4886]: I0219 21:02:39.917854 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46"} err="failed to get container status \"2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46\": rpc error: code = NotFound desc = could not find container \"2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46\": container with ID starting with 2dcd5dee7d119f33574b250ff4ebbf38a8445b0fae3872e2a2bf5b1c69e92a46 not found: ID does not exist" Feb 19 21:02:40 crc kubenswrapper[4886]: I0219 21:02:40.614867 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" path="/var/lib/kubelet/pods/1b53c853-896f-4711-a16d-5b7981cec96d/volumes" Feb 19 21:02:40 crc kubenswrapper[4886]: I0219 21:02:40.615674 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" path="/var/lib/kubelet/pods/7edb58ac-44cc-4e68-a1f0-3975d303b485/volumes" Feb 19 21:02:40 crc kubenswrapper[4886]: I0219 21:02:40.756179 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.086418 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.250238 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kube-api-access\") pod \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.251343 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kubelet-dir\") pod \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\" (UID: \"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b\") " Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.251467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6f3dbc6-bbd4-4984-8c57-3e2795189e5b" (UID: "d6f3dbc6-bbd4-4984-8c57-3e2795189e5b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.251723 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.254030 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6f3dbc6-bbd4-4984-8c57-3e2795189e5b" (UID: "d6f3dbc6-bbd4-4984-8c57-3e2795189e5b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.352496 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3dbc6-bbd4-4984-8c57-3e2795189e5b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.813525 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6f3dbc6-bbd4-4984-8c57-3e2795189e5b","Type":"ContainerDied","Data":"92dff0cc1ad2c684cf2d5198a9ddd54c74c5e188714badb84cf4438269219cb9"} Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.813580 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 21:02:41 crc kubenswrapper[4886]: I0219 21:02:41.813594 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92dff0cc1ad2c684cf2d5198a9ddd54c74c5e188714badb84cf4438269219cb9" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602614 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602859 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="extract-content" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602874 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="extract-content" Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602888 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="registry-server" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602896 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" 
containerName="registry-server" Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602908 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="registry-server" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602920 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="registry-server" Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602935 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f3dbc6-bbd4-4984-8c57-3e2795189e5b" containerName="pruner" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602943 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f3dbc6-bbd4-4984-8c57-3e2795189e5b" containerName="pruner" Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602956 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="extract-utilities" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602964 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="extract-utilities" Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602977 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="extract-content" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.602985 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="extract-content" Feb 19 21:02:42 crc kubenswrapper[4886]: E0219 21:02:42.602998 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="extract-utilities" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.603008 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" 
containerName="extract-utilities" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.603138 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b53c853-896f-4711-a16d-5b7981cec96d" containerName="registry-server" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.603153 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7edb58ac-44cc-4e68-a1f0-3975d303b485" containerName="registry-server" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.603164 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f3dbc6-bbd4-4984-8c57-3e2795189e5b" containerName="pruner" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.603827 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.606549 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.606764 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.621181 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.770850 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kube-api-access\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.770945 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.770976 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-var-lock\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.872379 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.872446 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-var-lock\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.872474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-var-lock\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.872445 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.872480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kube-api-access\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.890306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kube-api-access\") pod \"installer-9-crc\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:42 crc kubenswrapper[4886]: I0219 21:02:42.968032 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:02:43 crc kubenswrapper[4886]: I0219 21:02:43.406679 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 21:02:43 crc kubenswrapper[4886]: W0219 21:02:43.413019 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda3afcbe4_42fa_4910_9365_a0b6891fa49b.slice/crio-edf8174b39d785d4f2d346c85e3c239929cc8879c4909855d990ce49d0b75600 WatchSource:0}: Error finding container edf8174b39d785d4f2d346c85e3c239929cc8879c4909855d990ce49d0b75600: Status 404 returned error can't find the container with id edf8174b39d785d4f2d346c85e3c239929cc8879c4909855d990ce49d0b75600 Feb 19 21:02:43 crc kubenswrapper[4886]: I0219 21:02:43.824199 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a3afcbe4-42fa-4910-9365-a0b6891fa49b","Type":"ContainerStarted","Data":"775f7691ab17676a28da2ff9fb92cada92dfa841d7dd3bbdcd2928478a1d68ae"} Feb 19 21:02:43 crc 
kubenswrapper[4886]: I0219 21:02:43.824492 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a3afcbe4-42fa-4910-9365-a0b6891fa49b","Type":"ContainerStarted","Data":"edf8174b39d785d4f2d346c85e3c239929cc8879c4909855d990ce49d0b75600"} Feb 19 21:02:44 crc kubenswrapper[4886]: I0219 21:02:44.832276 4886 generic.go:334] "Generic (PLEG): container finished" podID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerID="fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9" exitCode=0 Feb 19 21:02:44 crc kubenswrapper[4886]: I0219 21:02:44.832305 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chw2d" event={"ID":"13570438-2f8f-4ad0-8320-a6fa737bf816","Type":"ContainerDied","Data":"fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9"} Feb 19 21:02:44 crc kubenswrapper[4886]: I0219 21:02:44.836001 4886 generic.go:334] "Generic (PLEG): container finished" podID="86bcf599-6058-4716-a144-d2dcb927c498" containerID="ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de" exitCode=0 Feb 19 21:02:44 crc kubenswrapper[4886]: I0219 21:02:44.836176 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbkd" event={"ID":"86bcf599-6058-4716-a144-d2dcb927c498","Type":"ContainerDied","Data":"ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de"} Feb 19 21:02:44 crc kubenswrapper[4886]: I0219 21:02:44.865938 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.865923203 podStartE2EDuration="2.865923203s" podCreationTimestamp="2026-02-19 21:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:02:44.862511277 +0000 UTC m=+195.490354327" watchObservedRunningTime="2026-02-19 21:02:44.865923203 +0000 UTC 
m=+195.493766253" Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.755027 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.843009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbkd" event={"ID":"86bcf599-6058-4716-a144-d2dcb927c498","Type":"ContainerStarted","Data":"8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23"} Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.845518 4886 generic.go:334] "Generic (PLEG): container finished" podID="123f03da-c3b1-49dd-aec6-8fd547885851" containerID="d93d5a0f681c56cec1fe2ab84feb6cd600fc5099ed0fd1d8cf1ceb0c904a2e74" exitCode=0 Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.845575 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xpm6" event={"ID":"123f03da-c3b1-49dd-aec6-8fd547885851","Type":"ContainerDied","Data":"d93d5a0f681c56cec1fe2ab84feb6cd600fc5099ed0fd1d8cf1ceb0c904a2e74"} Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.848440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chw2d" event={"ID":"13570438-2f8f-4ad0-8320-a6fa737bf816","Type":"ContainerStarted","Data":"e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91"} Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.865683 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tdbkd" podStartSLOduration=2.8670444870000003 podStartE2EDuration="48.865667002s" podCreationTimestamp="2026-02-19 21:01:57 +0000 UTC" firstStartedPulling="2026-02-19 21:01:59.371370593 +0000 UTC m=+149.999213643" lastFinishedPulling="2026-02-19 21:02:45.369993068 +0000 UTC m=+195.997836158" observedRunningTime="2026-02-19 21:02:45.865473697 +0000 UTC m=+196.493316747" 
watchObservedRunningTime="2026-02-19 21:02:45.865667002 +0000 UTC m=+196.493510052" Feb 19 21:02:45 crc kubenswrapper[4886]: I0219 21:02:45.882841 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chw2d" podStartSLOduration=3.004232525 podStartE2EDuration="47.882824104s" podCreationTimestamp="2026-02-19 21:01:58 +0000 UTC" firstStartedPulling="2026-02-19 21:02:00.369080548 +0000 UTC m=+150.996923598" lastFinishedPulling="2026-02-19 21:02:45.247672127 +0000 UTC m=+195.875515177" observedRunningTime="2026-02-19 21:02:45.882394783 +0000 UTC m=+196.510237833" watchObservedRunningTime="2026-02-19 21:02:45.882824104 +0000 UTC m=+196.510667154" Feb 19 21:02:46 crc kubenswrapper[4886]: I0219 21:02:46.855491 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xpm6" event={"ID":"123f03da-c3b1-49dd-aec6-8fd547885851","Type":"ContainerStarted","Data":"4f68b473f6331b2638c3a14acb4ecb616cd7e7a199a990a91605c63c8cc8fb73"} Feb 19 21:02:46 crc kubenswrapper[4886]: I0219 21:02:46.875672 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xpm6" podStartSLOduration=2.879912281 podStartE2EDuration="51.87565345s" podCreationTimestamp="2026-02-19 21:01:55 +0000 UTC" firstStartedPulling="2026-02-19 21:01:57.271041126 +0000 UTC m=+147.898884206" lastFinishedPulling="2026-02-19 21:02:46.266782315 +0000 UTC m=+196.894625375" observedRunningTime="2026-02-19 21:02:46.87446499 +0000 UTC m=+197.502308040" watchObservedRunningTime="2026-02-19 21:02:46.87565345 +0000 UTC m=+197.503496500" Feb 19 21:02:48 crc kubenswrapper[4886]: I0219 21:02:48.304512 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:02:48 crc kubenswrapper[4886]: I0219 21:02:48.304883 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:02:48 crc kubenswrapper[4886]: I0219 21:02:48.324804 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:02:48 crc kubenswrapper[4886]: I0219 21:02:48.324876 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:02:48 crc kubenswrapper[4886]: I0219 21:02:48.343665 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:02:49 crc kubenswrapper[4886]: I0219 21:02:49.328475 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:02:49 crc kubenswrapper[4886]: I0219 21:02:49.328698 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:02:50 crc kubenswrapper[4886]: I0219 21:02:50.387311 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chw2d" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="registry-server" probeResult="failure" output=< Feb 19 21:02:50 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:02:50 crc kubenswrapper[4886]: > Feb 19 21:02:55 crc kubenswrapper[4886]: I0219 21:02:55.964426 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:02:55 crc 
kubenswrapper[4886]: I0219 21:02:55.965242 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:02:56 crc kubenswrapper[4886]: I0219 21:02:56.035794 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:02:56 crc kubenswrapper[4886]: I0219 21:02:56.962571 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:02:58 crc kubenswrapper[4886]: I0219 21:02:58.376892 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.373718 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.440879 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.526817 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbkd"] Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.527039 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tdbkd" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="registry-server" containerID="cri-o://8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23" gracePeriod=2 Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.858542 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.900363 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p885k\" (UniqueName: \"kubernetes.io/projected/86bcf599-6058-4716-a144-d2dcb927c498-kube-api-access-p885k\") pod \"86bcf599-6058-4716-a144-d2dcb927c498\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.900430 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-catalog-content\") pod \"86bcf599-6058-4716-a144-d2dcb927c498\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.900457 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-utilities\") pod \"86bcf599-6058-4716-a144-d2dcb927c498\" (UID: \"86bcf599-6058-4716-a144-d2dcb927c498\") " Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.901319 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-utilities" (OuterVolumeSpecName: "utilities") pod "86bcf599-6058-4716-a144-d2dcb927c498" (UID: "86bcf599-6058-4716-a144-d2dcb927c498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.905821 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bcf599-6058-4716-a144-d2dcb927c498-kube-api-access-p885k" (OuterVolumeSpecName: "kube-api-access-p885k") pod "86bcf599-6058-4716-a144-d2dcb927c498" (UID: "86bcf599-6058-4716-a144-d2dcb927c498"). InnerVolumeSpecName "kube-api-access-p885k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.934416 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86bcf599-6058-4716-a144-d2dcb927c498" (UID: "86bcf599-6058-4716-a144-d2dcb927c498"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.936082 4886 generic.go:334] "Generic (PLEG): container finished" podID="86bcf599-6058-4716-a144-d2dcb927c498" containerID="8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23" exitCode=0 Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.936134 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdbkd" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.936173 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbkd" event={"ID":"86bcf599-6058-4716-a144-d2dcb927c498","Type":"ContainerDied","Data":"8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23"} Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.936223 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbkd" event={"ID":"86bcf599-6058-4716-a144-d2dcb927c498","Type":"ContainerDied","Data":"6233a8b9f18a619b02d324f49fe6f596b65cf55fbec78c5f4112f5688edd2750"} Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.936252 4886 scope.go:117] "RemoveContainer" containerID="8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.960154 4886 scope.go:117] "RemoveContainer" containerID="ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de" Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 
21:02:59.962298 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbkd"] Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.966716 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbkd"] Feb 19 21:02:59 crc kubenswrapper[4886]: I0219 21:02:59.986022 4886 scope.go:117] "RemoveContainer" containerID="60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.000946 4886 scope.go:117] "RemoveContainer" containerID="8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23" Feb 19 21:03:00 crc kubenswrapper[4886]: E0219 21:03:00.001205 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23\": container with ID starting with 8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23 not found: ID does not exist" containerID="8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001241 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23"} err="failed to get container status \"8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23\": rpc error: code = NotFound desc = could not find container \"8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23\": container with ID starting with 8b95aabaeb5e00bb79e06de8cb1b67f11f651417b988a63e2439a23d8db5cc23 not found: ID does not exist" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001279 4886 scope.go:117] "RemoveContainer" containerID="ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de" Feb 19 21:03:00 crc kubenswrapper[4886]: E0219 21:03:00.001598 4886 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de\": container with ID starting with ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de not found: ID does not exist" containerID="ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001671 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de"} err="failed to get container status \"ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de\": rpc error: code = NotFound desc = could not find container \"ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de\": container with ID starting with ea01683c17a1bae32bea9e09a679252d0e0ccbc67aa11952e7011559b34078de not found: ID does not exist" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001713 4886 scope.go:117] "RemoveContainer" containerID="60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001889 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001934 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86bcf599-6058-4716-a144-d2dcb927c498-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.001948 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p885k\" (UniqueName: \"kubernetes.io/projected/86bcf599-6058-4716-a144-d2dcb927c498-kube-api-access-p885k\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:00 crc kubenswrapper[4886]: E0219 
21:03:00.002112 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef\": container with ID starting with 60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef not found: ID does not exist" containerID="60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.002146 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef"} err="failed to get container status \"60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef\": rpc error: code = NotFound desc = could not find container \"60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef\": container with ID starting with 60c811e4954d7703a8f176d97ed7e3b652e488a15c5a9b7ebe05a296b5b934ef not found: ID does not exist" Feb 19 21:03:00 crc kubenswrapper[4886]: I0219 21:03:00.610923 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bcf599-6058-4716-a144-d2dcb927c498" path="/var/lib/kubelet/pods/86bcf599-6058-4716-a144-d2dcb927c498/volumes" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.324759 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chw2d"] Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.325197 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chw2d" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="registry-server" containerID="cri-o://e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91" gracePeriod=2 Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.801786 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.827728 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-utilities\") pod \"13570438-2f8f-4ad0-8320-a6fa737bf816\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.827786 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxk4\" (UniqueName: \"kubernetes.io/projected/13570438-2f8f-4ad0-8320-a6fa737bf816-kube-api-access-xkxk4\") pod \"13570438-2f8f-4ad0-8320-a6fa737bf816\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.827829 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-catalog-content\") pod \"13570438-2f8f-4ad0-8320-a6fa737bf816\" (UID: \"13570438-2f8f-4ad0-8320-a6fa737bf816\") " Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.830649 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-utilities" (OuterVolumeSpecName: "utilities") pod "13570438-2f8f-4ad0-8320-a6fa737bf816" (UID: "13570438-2f8f-4ad0-8320-a6fa737bf816"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.836128 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13570438-2f8f-4ad0-8320-a6fa737bf816-kube-api-access-xkxk4" (OuterVolumeSpecName: "kube-api-access-xkxk4") pod "13570438-2f8f-4ad0-8320-a6fa737bf816" (UID: "13570438-2f8f-4ad0-8320-a6fa737bf816"). InnerVolumeSpecName "kube-api-access-xkxk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.929171 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.929203 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxk4\" (UniqueName: \"kubernetes.io/projected/13570438-2f8f-4ad0-8320-a6fa737bf816-kube-api-access-xkxk4\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.958060 4886 generic.go:334] "Generic (PLEG): container finished" podID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerID="e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91" exitCode=0 Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.958146 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chw2d" event={"ID":"13570438-2f8f-4ad0-8320-a6fa737bf816","Type":"ContainerDied","Data":"e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91"} Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.958500 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chw2d" event={"ID":"13570438-2f8f-4ad0-8320-a6fa737bf816","Type":"ContainerDied","Data":"b3f04439f7532627f547146e982ecaab40ed6433bacb88bc212d809585c1e394"} Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.958538 4886 scope.go:117] "RemoveContainer" containerID="e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.958205 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chw2d" Feb 19 21:03:01 crc kubenswrapper[4886]: I0219 21:03:01.985647 4886 scope.go:117] "RemoveContainer" containerID="fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.005394 4886 scope.go:117] "RemoveContainer" containerID="0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.017084 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13570438-2f8f-4ad0-8320-a6fa737bf816" (UID: "13570438-2f8f-4ad0-8320-a6fa737bf816"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.024237 4886 scope.go:117] "RemoveContainer" containerID="e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91" Feb 19 21:03:02 crc kubenswrapper[4886]: E0219 21:03:02.024776 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91\": container with ID starting with e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91 not found: ID does not exist" containerID="e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.024831 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91"} err="failed to get container status \"e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91\": rpc error: code = NotFound desc = could not find container \"e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91\": container 
with ID starting with e04313ba2ff43a116cfa51efea7977492e437edd8d7bfbe1cf4cbdeaa834bc91 not found: ID does not exist" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.024865 4886 scope.go:117] "RemoveContainer" containerID="fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9" Feb 19 21:03:02 crc kubenswrapper[4886]: E0219 21:03:02.025431 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9\": container with ID starting with fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9 not found: ID does not exist" containerID="fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.025520 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9"} err="failed to get container status \"fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9\": rpc error: code = NotFound desc = could not find container \"fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9\": container with ID starting with fbebed6f7fff4a2e04b860e7feb1b31070961ff83cab5bb5721f93ab3234afb9 not found: ID does not exist" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.025566 4886 scope.go:117] "RemoveContainer" containerID="0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb" Feb 19 21:03:02 crc kubenswrapper[4886]: E0219 21:03:02.026014 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb\": container with ID starting with 0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb not found: ID does not exist" containerID="0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb" 
Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.026097 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb"} err="failed to get container status \"0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb\": rpc error: code = NotFound desc = could not find container \"0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb\": container with ID starting with 0350bdce40907c44664f35ad4f8d8bcba30691cbf147e2200e64eef35101dffb not found: ID does not exist" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.030701 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13570438-2f8f-4ad0-8320-a6fa737bf816-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.303564 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chw2d"] Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.309052 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chw2d"] Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.461535 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" podUID="e42dbc11-eae0-4ed9-a653-304ed853ada3" containerName="oauth-openshift" containerID="cri-o://ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b" gracePeriod=15 Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.610837 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" path="/var/lib/kubelet/pods/13570438-2f8f-4ad0-8320-a6fa737bf816/volumes" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.901071 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949238 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-router-certs\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949300 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-trusted-ca-bundle\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949324 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-dir\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949354 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-session\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949376 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-policies\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 
21:03:02.949395 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-idp-0-file-data\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949414 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-ocp-branding-template\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949415 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-provider-selection\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949508 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-service-ca\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949548 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxqsd\" (UniqueName: \"kubernetes.io/projected/e42dbc11-eae0-4ed9-a653-304ed853ada3-kube-api-access-xxqsd\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949576 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-login\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949632 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-serving-cert\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 
21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949655 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-error\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949677 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-cliconfig\") pod \"e42dbc11-eae0-4ed9-a653-304ed853ada3\" (UID: \"e42dbc11-eae0-4ed9-a653-304ed853ada3\") " Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.949989 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.950013 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.950056 4886 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.950406 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.950658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.955066 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.955209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.955611 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.955656 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42dbc11-eae0-4ed9-a653-304ed853ada3-kube-api-access-xxqsd" (OuterVolumeSpecName: "kube-api-access-xxqsd") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "kube-api-access-xxqsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.955778 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.955936 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.956196 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.956250 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.959566 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e42dbc11-eae0-4ed9-a653-304ed853ada3" (UID: "e42dbc11-eae0-4ed9-a653-304ed853ada3"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.969015 4886 generic.go:334] "Generic (PLEG): container finished" podID="e42dbc11-eae0-4ed9-a653-304ed853ada3" containerID="ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b" exitCode=0 Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.969052 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" event={"ID":"e42dbc11-eae0-4ed9-a653-304ed853ada3","Type":"ContainerDied","Data":"ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b"} Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.969077 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" event={"ID":"e42dbc11-eae0-4ed9-a653-304ed853ada3","Type":"ContainerDied","Data":"e07229e732b34ead6e23eaedb6fb6fcb67b3e6f0151ea9608319b812707d6aee"} Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.969093 4886 scope.go:117] "RemoveContainer" containerID="ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.969176 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5f4xd" Feb 19 21:03:02 crc kubenswrapper[4886]: I0219 21:03:02.999283 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5f4xd"] Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:02.999845 4886 scope.go:117] "RemoveContainer" containerID="ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b" Feb 19 21:03:03 crc kubenswrapper[4886]: E0219 21:03:03.001123 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b\": container with ID starting with ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b not found: ID does not exist" containerID="ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.001431 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b"} err="failed to get container status \"ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b\": rpc error: code = NotFound desc = could not find container \"ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b\": container with ID starting with ae2633f181a6f087d6b51b9e646b37a2069f47dda22d35fa2edfdd82b90dda0b not found: ID does not exist" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.002314 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5f4xd"] Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051093 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 
21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051217 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051306 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051375 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051439 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051528 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051597 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051664 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051729 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxqsd\" (UniqueName: \"kubernetes.io/projected/e42dbc11-eae0-4ed9-a653-304ed853ada3-kube-api-access-xxqsd\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051792 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051861 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051925 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:03 crc kubenswrapper[4886]: I0219 21:03:03.051995 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e42dbc11-eae0-4ed9-a653-304ed853ada3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:04 crc kubenswrapper[4886]: I0219 21:03:04.615819 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42dbc11-eae0-4ed9-a653-304ed853ada3" path="/var/lib/kubelet/pods/e42dbc11-eae0-4ed9-a653-304ed853ada3/volumes" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.525325 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-79b5c48459-6tqsd"] Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.525920 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42dbc11-eae0-4ed9-a653-304ed853ada3" containerName="oauth-openshift" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.525943 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42dbc11-eae0-4ed9-a653-304ed853ada3" containerName="oauth-openshift" Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.525975 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="registry-server" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.525987 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="registry-server" Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.526015 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="extract-utilities" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526027 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="extract-utilities" Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.526043 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="extract-content" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526055 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="extract-content" Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.526072 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="extract-content" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526084 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="extract-content" Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.526103 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="extract-utilities" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526117 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="extract-utilities" Feb 19 21:03:08 crc kubenswrapper[4886]: E0219 21:03:08.526132 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="registry-server" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526145 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="registry-server" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526337 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="13570438-2f8f-4ad0-8320-a6fa737bf816" containerName="registry-server" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526366 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42dbc11-eae0-4ed9-a653-304ed853ada3" containerName="oauth-openshift" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526390 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bcf599-6058-4716-a144-d2dcb927c498" containerName="registry-server" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.526962 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.533573 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.533702 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.533702 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.533994 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.534060 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.533938 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.534404 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.536002 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.536273 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.536615 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 21:03:08 crc kubenswrapper[4886]: 
I0219 21:03:08.536688 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.537182 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.546890 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.561641 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.576889 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-6tqsd"] Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.602307 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668170 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668265 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " 
pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668332 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668532 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668620 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668683 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748ng\" (UniqueName: \"kubernetes.io/projected/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-kube-api-access-748ng\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668785 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668857 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-audit-dir\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668883 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668911 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-audit-policies\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668957 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.668988 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.669019 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.770931 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-6tqsd\" 
(UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771059 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771138 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771182 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771221 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748ng\" (UniqueName: \"kubernetes.io/projected/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-kube-api-access-748ng\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771297 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771349 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-audit-dir\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771417 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-audit-policies\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771449 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " 
pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771516 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771552 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.771595 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.772316 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-audit-dir\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.772869 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.772999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-audit-policies\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.774730 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.775515 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 
crc kubenswrapper[4886]: I0219 21:03:08.778869 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.780125 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.781018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.781517 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.781748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.783168 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.784008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.789500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.803092 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748ng\" (UniqueName: \"kubernetes.io/projected/1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6-kube-api-access-748ng\") pod \"oauth-openshift-79b5c48459-6tqsd\" (UID: \"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 
21:03:08 crc kubenswrapper[4886]: I0219 21:03:08.867208 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:09 crc kubenswrapper[4886]: I0219 21:03:09.358554 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-6tqsd"] Feb 19 21:03:10 crc kubenswrapper[4886]: I0219 21:03:10.019909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" event={"ID":"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6","Type":"ContainerStarted","Data":"90893d6f3ab89232e436d6eb9a88fa498ca207e72b0039fa3f7c39d14f34d981"} Feb 19 21:03:10 crc kubenswrapper[4886]: I0219 21:03:10.020416 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:10 crc kubenswrapper[4886]: I0219 21:03:10.020440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" event={"ID":"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6","Type":"ContainerStarted","Data":"61ce2935615fedd6007b570fcbd84498c054b36c02d70438403c216cf2dbcbec"} Feb 19 21:03:10 crc kubenswrapper[4886]: I0219 21:03:10.054162 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podStartSLOduration=33.054144257 podStartE2EDuration="33.054144257s" podCreationTimestamp="2026-02-19 21:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:03:10.048956627 +0000 UTC m=+220.676799687" watchObservedRunningTime="2026-02-19 21:03:10.054144257 +0000 UTC m=+220.681987317" Feb 19 21:03:10 crc kubenswrapper[4886]: I0219 21:03:10.273027 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 21:03:18 crc kubenswrapper[4886]: I0219 21:03:18.325249 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:03:18 crc kubenswrapper[4886]: I0219 21:03:18.326113 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:03:18 crc kubenswrapper[4886]: I0219 21:03:18.326194 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:03:18 crc kubenswrapper[4886]: I0219 21:03:18.327309 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:03:18 crc kubenswrapper[4886]: I0219 21:03:18.327448 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af" gracePeriod=600 Feb 19 21:03:19 crc kubenswrapper[4886]: I0219 21:03:19.083614 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af" exitCode=0 Feb 19 21:03:19 crc kubenswrapper[4886]: I0219 21:03:19.083653 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af"} Feb 19 21:03:19 crc kubenswrapper[4886]: I0219 21:03:19.083678 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"3464edb1ae3ca2be01c37ffa7dd5b104876610570320b94e212300c87c30c890"} Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.310882 4886 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.311703 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b" gracePeriod=15 Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.311869 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196" gracePeriod=15 Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.311923 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a" gracePeriod=15 Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.311967 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c" gracePeriod=15 Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.312012 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6" gracePeriod=15 Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.314583 4886 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.314919 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.314936 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.314953 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.314962 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.314998 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315007 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.315020 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315028 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.315048 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315082 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.315096 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315105 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.315114 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315124 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315305 
4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315322 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315336 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315373 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315383 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315397 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: E0219 21:03:21.315616 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.315628 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.316147 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.317490 4886 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.318047 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.321687 4886 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.456864 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457207 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457262 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457305 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457326 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.457378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558724 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558794 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558850 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558890 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558931 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558971 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558997 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.558941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559290 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559305 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559314 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559336 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:21 crc kubenswrapper[4886]: I0219 21:03:21.559446 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.103786 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.105001 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.105585 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196" exitCode=0 Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.105610 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a" exitCode=0 Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.105618 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c" exitCode=0 Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.105627 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6" exitCode=2 Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.105695 4886 scope.go:117] "RemoveContainer" containerID="fc001703320e3b190b1697fd1d3eb5a8a9c5fc601f136697be5c17a0de7ec666" Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.108586 4886 generic.go:334] "Generic (PLEG): container finished" podID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" containerID="775f7691ab17676a28da2ff9fb92cada92dfa841d7dd3bbdcd2928478a1d68ae" exitCode=0 Feb 19 
21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.108614 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a3afcbe4-42fa-4910-9365-a0b6891fa49b","Type":"ContainerDied","Data":"775f7691ab17676a28da2ff9fb92cada92dfa841d7dd3bbdcd2928478a1d68ae"} Feb 19 21:03:22 crc kubenswrapper[4886]: I0219 21:03:22.109399 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.130034 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.473450 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.474457 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.582922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kube-api-access\") pod \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.582985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kubelet-dir\") pod \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.583012 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-var-lock\") pod \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\" (UID: \"a3afcbe4-42fa-4910-9365-a0b6891fa49b\") " Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.583109 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3afcbe4-42fa-4910-9365-a0b6891fa49b" (UID: "a3afcbe4-42fa-4910-9365-a0b6891fa49b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.583144 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-var-lock" (OuterVolumeSpecName: "var-lock") pod "a3afcbe4-42fa-4910-9365-a0b6891fa49b" (UID: "a3afcbe4-42fa-4910-9365-a0b6891fa49b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.583296 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.583314 4886 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3afcbe4-42fa-4910-9365-a0b6891fa49b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.591070 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3afcbe4-42fa-4910-9365-a0b6891fa49b" (UID: "a3afcbe4-42fa-4910-9365-a0b6891fa49b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.678846 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.679680 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.680362 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.681005 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.684555 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3afcbe4-42fa-4910-9365-a0b6891fa49b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785597 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785590 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" 
(OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785651 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785689 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785842 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.785979 4886 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.786001 4886 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:23 crc kubenswrapper[4886]: I0219 21:03:23.786019 4886 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.139025 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.140027 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b" exitCode=0 Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.140106 4886 scope.go:117] "RemoveContainer" containerID="b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.140212 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.141834 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a3afcbe4-42fa-4910-9365-a0b6891fa49b","Type":"ContainerDied","Data":"edf8174b39d785d4f2d346c85e3c239929cc8879c4909855d990ce49d0b75600"} Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.141861 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf8174b39d785d4f2d346c85e3c239929cc8879c4909855d990ce49d0b75600" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.141906 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.155461 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.156055 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.159751 4886 scope.go:117] "RemoveContainer" containerID="908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.170835 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.171450 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.176415 4886 scope.go:117] "RemoveContainer" containerID="5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.195801 4886 scope.go:117] "RemoveContainer" containerID="fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.216217 4886 scope.go:117] "RemoveContainer" containerID="8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.240020 4886 scope.go:117] "RemoveContainer" containerID="efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.267920 4886 scope.go:117] "RemoveContainer" containerID="b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196" Feb 19 21:03:24 crc kubenswrapper[4886]: E0219 21:03:24.268519 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\": container with ID starting with b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196 not found: ID does not exist" containerID="b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.268577 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196"} err="failed to get container status \"b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\": rpc error: code = NotFound desc = could not find container \"b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196\": container with ID starting with b336d894087de8e48f4a58bf0116137b834a56cb9deac5a43b3580668afa5196 not found: ID does not exist" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.268610 4886 scope.go:117] "RemoveContainer" containerID="908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a" Feb 19 21:03:24 crc kubenswrapper[4886]: E0219 21:03:24.269147 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\": container with ID starting with 908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a not found: ID does not exist" containerID="908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.269186 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a"} err="failed to get container status \"908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\": rpc error: code = NotFound desc = could not find container \"908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a\": container with ID starting with 908cfee4d9d0baee27b4f4e747138b51401c79a9c9b938bac77468537895db7a not found: ID does not exist" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.269213 4886 scope.go:117] "RemoveContainer" containerID="5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c" Feb 19 21:03:24 crc kubenswrapper[4886]: E0219 
21:03:24.269878 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\": container with ID starting with 5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c not found: ID does not exist" containerID="5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.269941 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c"} err="failed to get container status \"5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\": rpc error: code = NotFound desc = could not find container \"5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c\": container with ID starting with 5c4cb8bc50ab8dde6be204f3921e0acd24105775b4f9bf5f55d08421468c000c not found: ID does not exist" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.269961 4886 scope.go:117] "RemoveContainer" containerID="fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6" Feb 19 21:03:24 crc kubenswrapper[4886]: E0219 21:03:24.270303 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\": container with ID starting with fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6 not found: ID does not exist" containerID="fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.270577 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6"} err="failed to get container status \"fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\": rpc 
error: code = NotFound desc = could not find container \"fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6\": container with ID starting with fbb012ab73c634f50651873e4b3058968b03cd488f6098654bdd78ade37597a6 not found: ID does not exist" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.270601 4886 scope.go:117] "RemoveContainer" containerID="8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b" Feb 19 21:03:24 crc kubenswrapper[4886]: E0219 21:03:24.271250 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\": container with ID starting with 8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b not found: ID does not exist" containerID="8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.271544 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b"} err="failed to get container status \"8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\": rpc error: code = NotFound desc = could not find container \"8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b\": container with ID starting with 8518955bff67a79ac2c77181665b7b67bcf32279e7dd0b3c07fea7f89d880f2b not found: ID does not exist" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.271588 4886 scope.go:117] "RemoveContainer" containerID="efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f" Feb 19 21:03:24 crc kubenswrapper[4886]: E0219 21:03:24.271972 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\": container with ID starting with 
efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f not found: ID does not exist" containerID="efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.272001 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f"} err="failed to get container status \"efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\": rpc error: code = NotFound desc = could not find container \"efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f\": container with ID starting with efc3ef072bb6c03e4041c4eb38b9ab3b6139d439abd17b966bda4309c2cee39f not found: ID does not exist" Feb 19 21:03:24 crc kubenswrapper[4886]: I0219 21:03:24.606392 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 21:03:26 crc kubenswrapper[4886]: E0219 21:03:26.356203 4886 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:26 crc kubenswrapper[4886]: I0219 21:03:26.357061 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:26 crc kubenswrapper[4886]: E0219 21:03:26.389864 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895c1b3cb017757 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 21:03:26.389335895 +0000 UTC m=+237.017178985,LastTimestamp:2026-02-19 21:03:26.389335895 +0000 UTC m=+237.017178985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 21:03:27 crc kubenswrapper[4886]: I0219 21:03:27.168845 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6"} Feb 19 21:03:27 crc kubenswrapper[4886]: I0219 21:03:27.169213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2ae49f9dae3095c072fbe578300612942864eb05ee710db558ab3a474495a23d"} Feb 19 21:03:27 crc 
kubenswrapper[4886]: E0219 21:03:27.170087 4886 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:03:27 crc kubenswrapper[4886]: I0219 21:03:27.170349 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.367887 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.368372 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.368841 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.369083 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.369351 4886 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:27 crc kubenswrapper[4886]: I0219 21:03:27.369405 4886 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.369763 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.570922 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.689394 4886 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" volumeName="registry-storage" Feb 19 21:03:27 crc kubenswrapper[4886]: E0219 21:03:27.971964 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.416475 4886 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:03:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:03:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:03:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T21:03:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.417318 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.417956 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.418480 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 
38.102.83.30:6443: connect: connection refused" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.419104 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.419142 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 21:03:28 crc kubenswrapper[4886]: E0219 21:03:28.773831 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Feb 19 21:03:29 crc kubenswrapper[4886]: E0219 21:03:29.569092 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895c1b3cb017757 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 21:03:26.389335895 +0000 UTC m=+237.017178985,LastTimestamp:2026-02-19 21:03:26.389335895 +0000 UTC m=+237.017178985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 21:03:30 crc kubenswrapper[4886]: E0219 21:03:30.374595 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Feb 19 21:03:30 crc kubenswrapper[4886]: I0219 21:03:30.617051 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:33 crc kubenswrapper[4886]: E0219 21:03:33.576370 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="6.4s" Feb 19 21:03:35 crc kubenswrapper[4886]: I0219 21:03:35.600128 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:35 crc kubenswrapper[4886]: I0219 21:03:35.601866 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:35 crc kubenswrapper[4886]: I0219 21:03:35.627359 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:35 crc kubenswrapper[4886]: I0219 21:03:35.627404 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:35 crc kubenswrapper[4886]: E0219 21:03:35.628059 4886 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:35 crc kubenswrapper[4886]: I0219 21:03:35.628615 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.234344 4886 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e0e496cdfc107bf1f31a2d8e9ca20ef1a012be8e9dd26f8ba1018a07479df111" exitCode=0 Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.234521 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e0e496cdfc107bf1f31a2d8e9ca20ef1a012be8e9dd26f8ba1018a07479df111"} Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.234872 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"04f1cc0ef9819c569bb369517a552e871dbf951aeba85200f6ff2c92be512550"} Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.235319 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.235355 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.235729 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:36 crc kubenswrapper[4886]: E0219 21:03:36.235897 4886 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.239754 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.239830 4886 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3" exitCode=1 Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.239870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3"} Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.240412 4886 scope.go:117] "RemoveContainer" containerID="4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.241112 4886 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.241643 4886 status_manager.go:851] "Failed to get status for pod" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Feb 19 21:03:36 crc kubenswrapper[4886]: I0219 21:03:36.306219 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.264192 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d7844d635f425f4641d6001fb759140dc47de477ad23f8fe2cec912ae8585de"} Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.265629 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"52702690638c3987e911db356ee29338bb372919958c0075bb7c0e09dace10af"} Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.265796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"23f562daa6a80ddcce7741f4409b8fbfa813377f973ad9fcd6e5071a7bbc9cda"} Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.283360 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.283415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a89738ee3e7bc021fddcc3ca5eea8eb3f8f9511340b9e6dc038bf72db46dd4af"} Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.352637 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 21:03:37 crc kubenswrapper[4886]: I0219 21:03:37.357018 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 21:03:38 crc kubenswrapper[4886]: I0219 21:03:38.291881 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:38 crc kubenswrapper[4886]: I0219 21:03:38.292134 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:38 crc kubenswrapper[4886]: I0219 21:03:38.292085 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1353229f467a0652e72da14f929ef765901c57de46690e8f1f94d8d62fc4bbc3"} Feb 19 21:03:38 crc kubenswrapper[4886]: I0219 21:03:38.292257 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae1f10d19df30b7324b7a423c1da228599b6a25f8a774ed0b6403b47ac6a7b34"} Feb 19 21:03:38 crc kubenswrapper[4886]: I0219 21:03:38.292310 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 21:03:38 crc kubenswrapper[4886]: I0219 21:03:38.292333 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:40 crc kubenswrapper[4886]: I0219 21:03:40.628937 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:40 crc kubenswrapper[4886]: I0219 21:03:40.628991 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:40 crc kubenswrapper[4886]: I0219 21:03:40.637317 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:43 crc kubenswrapper[4886]: I0219 21:03:43.300117 4886 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:43 crc kubenswrapper[4886]: I0219 21:03:43.318499 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:43 crc kubenswrapper[4886]: I0219 21:03:43.318527 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:43 crc kubenswrapper[4886]: I0219 21:03:43.322088 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:43 crc kubenswrapper[4886]: I0219 21:03:43.323857 4886 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c018c38b-4714-42c5-a6b0-c8800db207f0" Feb 19 21:03:44 crc kubenswrapper[4886]: I0219 21:03:44.324527 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:44 crc kubenswrapper[4886]: I0219 21:03:44.324570 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0d348c50-0d89-4a53-8364-14d6d129cd03" Feb 19 21:03:50 crc kubenswrapper[4886]: I0219 21:03:50.622375 4886 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c018c38b-4714-42c5-a6b0-c8800db207f0" Feb 19 21:03:53 crc kubenswrapper[4886]: I0219 21:03:53.012970 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 21:03:53 crc kubenswrapper[4886]: I0219 21:03:53.074772 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 21:03:53 crc kubenswrapper[4886]: I0219 21:03:53.595328 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 21:03:53 crc kubenswrapper[4886]: I0219 21:03:53.614730 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 21:03:53 crc kubenswrapper[4886]: I0219 21:03:53.744627 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 21:03:53 crc kubenswrapper[4886]: I0219 21:03:53.972528 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.170007 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.180438 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.212792 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.396651 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.527109 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 
21:03:54.583401 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.626721 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.657034 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.810559 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 21:03:54 crc kubenswrapper[4886]: I0219 21:03:54.972980 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.151132 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.240601 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.614179 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.680097 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.707409 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.721114 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 21:03:55 
crc kubenswrapper[4886]: I0219 21:03:55.722699 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 21:03:55 crc kubenswrapper[4886]: I0219 21:03:55.946782 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.045607 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.136693 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.149720 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.169579 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.199058 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.216763 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.300470 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.455094 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.455093 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.551336 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.593027 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.596395 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.604329 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.645806 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.700302 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 21:03:56 crc kubenswrapper[4886]: I0219 21:03:56.967024 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.200741 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.355857 4886 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.362904 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.362978 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.364762 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.366961 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.385614 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.385590494 podStartE2EDuration="14.385590494s" podCreationTimestamp="2026-02-19 21:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:03:57.385503841 +0000 UTC m=+268.013346901" watchObservedRunningTime="2026-02-19 21:03:57.385590494 +0000 UTC m=+268.013433584" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.405183 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.421690 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.430383 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.440418 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.560547 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 
21:03:57.584428 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.619451 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.726168 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.737809 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.789651 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.866882 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.890598 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 21:03:57 crc kubenswrapper[4886]: I0219 21:03:57.999067 4886 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.027994 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.117081 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.370456 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.372596 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.510985 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.742472 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.872745 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.914812 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 21:03:58 crc kubenswrapper[4886]: I0219 21:03:58.994129 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.017572 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.022635 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.065046 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.087489 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" 
Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.151169 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.153535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.175790 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.255800 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.287997 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.319919 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.410686 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.421654 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.425207 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.445772 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.484218 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.484394 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.505359 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.523946 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.553715 4886 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.559976 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.611446 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.617997 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.636719 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.656761 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.814210 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.894497 4886 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.895757 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.915728 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 21:03:59 crc kubenswrapper[4886]: I0219 21:03:59.941206 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.035380 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.035719 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.069534 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c89wj"] Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.069754 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c89wj" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="registry-server" containerID="cri-o://015a717a2f424398fe158e109985eb3c18e0fb2617b8ec7d48e023b069bff693" gracePeriod=30 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.081351 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xpm6"] Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.081576 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xpm6" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="registry-server" 
containerID="cri-o://4f68b473f6331b2638c3a14acb4ecb616cd7e7a199a990a91605c63c8cc8fb73" gracePeriod=30 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.089707 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqhlp"] Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.089924 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerName="marketplace-operator" containerID="cri-o://da907c0eadde67b6513c8a7680e3b2a2483adb025cd1b2740ff66a0500bb14b5" gracePeriod=30 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.094470 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qh8"] Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.095118 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.098946 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7qh8" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="registry-server" containerID="cri-o://c0ed80b21aa1e078ea6a9cc72e46dd9bda17c8ea3b8f629bff65bea64549c574" gracePeriod=30 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.101285 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.102758 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgcgg"] Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.103028 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dgcgg" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" 
containerName="registry-server" containerID="cri-o://8afcdc4f6e4995f4b0875488b207d7499a0ea8e241f8e6c52a01349eb51f93c4" gracePeriod=30 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.183217 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.207733 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.227931 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.302225 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.365534 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.437703 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.443369 4886 generic.go:334] "Generic (PLEG): container finished" podID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerID="015a717a2f424398fe158e109985eb3c18e0fb2617b8ec7d48e023b069bff693" exitCode=0 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.443452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c89wj" event={"ID":"ca71404c-eec8-471e-a7ab-89f4ee69b025","Type":"ContainerDied","Data":"015a717a2f424398fe158e109985eb3c18e0fb2617b8ec7d48e023b069bff693"} Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.447117 4886 generic.go:334] "Generic (PLEG): container finished" podID="eafab696-8d58-4612-97af-abb4fea7dd97" 
containerID="8afcdc4f6e4995f4b0875488b207d7499a0ea8e241f8e6c52a01349eb51f93c4" exitCode=0 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.447238 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerDied","Data":"8afcdc4f6e4995f4b0875488b207d7499a0ea8e241f8e6c52a01349eb51f93c4"} Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.460521 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.471878 4886 generic.go:334] "Generic (PLEG): container finished" podID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerID="da907c0eadde67b6513c8a7680e3b2a2483adb025cd1b2740ff66a0500bb14b5" exitCode=0 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.471956 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" event={"ID":"8b5c741c-61db-4b35-997b-8edd406b5b01","Type":"ContainerDied","Data":"da907c0eadde67b6513c8a7680e3b2a2483adb025cd1b2740ff66a0500bb14b5"} Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.476844 4886 generic.go:334] "Generic (PLEG): container finished" podID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerID="c0ed80b21aa1e078ea6a9cc72e46dd9bda17c8ea3b8f629bff65bea64549c574" exitCode=0 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.476896 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qh8" event={"ID":"ffc2e69b-08e3-4d9d-b21d-755914032707","Type":"ContainerDied","Data":"c0ed80b21aa1e078ea6a9cc72e46dd9bda17c8ea3b8f629bff65bea64549c574"} Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.480376 4886 generic.go:334] "Generic (PLEG): container finished" podID="123f03da-c3b1-49dd-aec6-8fd547885851" 
containerID="4f68b473f6331b2638c3a14acb4ecb616cd7e7a199a990a91605c63c8cc8fb73" exitCode=0 Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.480518 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xpm6" event={"ID":"123f03da-c3b1-49dd-aec6-8fd547885851","Type":"ContainerDied","Data":"4f68b473f6331b2638c3a14acb4ecb616cd7e7a199a990a91605c63c8cc8fb73"} Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.513748 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.582173 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.589500 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.590609 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.604951 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-utilities\") pod \"ca71404c-eec8-471e-a7ab-89f4ee69b025\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.605062 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfs8w\" (UniqueName: \"kubernetes.io/projected/ca71404c-eec8-471e-a7ab-89f4ee69b025-kube-api-access-gfs8w\") pod \"ca71404c-eec8-471e-a7ab-89f4ee69b025\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.605094 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-catalog-content\") pod \"ca71404c-eec8-471e-a7ab-89f4ee69b025\" (UID: \"ca71404c-eec8-471e-a7ab-89f4ee69b025\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.606309 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-utilities" (OuterVolumeSpecName: "utilities") pod "ca71404c-eec8-471e-a7ab-89f4ee69b025" (UID: "ca71404c-eec8-471e-a7ab-89f4ee69b025"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.612909 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca71404c-eec8-471e-a7ab-89f4ee69b025-kube-api-access-gfs8w" (OuterVolumeSpecName: "kube-api-access-gfs8w") pod "ca71404c-eec8-471e-a7ab-89f4ee69b025" (UID: "ca71404c-eec8-471e-a7ab-89f4ee69b025"). InnerVolumeSpecName "kube-api-access-gfs8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.613821 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.639090 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.645022 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.651349 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.654125 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.657953 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.688760 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca71404c-eec8-471e-a7ab-89f4ee69b025" (UID: "ca71404c-eec8-471e-a7ab-89f4ee69b025"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705678 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-utilities\") pod \"eafab696-8d58-4612-97af-abb4fea7dd97\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705717 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn9vf\" (UniqueName: \"kubernetes.io/projected/8b5c741c-61db-4b35-997b-8edd406b5b01-kube-api-access-qn9vf\") pod \"8b5c741c-61db-4b35-997b-8edd406b5b01\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705738 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-catalog-content\") pod \"ffc2e69b-08e3-4d9d-b21d-755914032707\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " 
Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705765 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-catalog-content\") pod \"eafab696-8d58-4612-97af-abb4fea7dd97\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705810 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-operator-metrics\") pod \"8b5c741c-61db-4b35-997b-8edd406b5b01\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705834 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-utilities\") pod \"123f03da-c3b1-49dd-aec6-8fd547885851\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-catalog-content\") pod \"123f03da-c3b1-49dd-aec6-8fd547885851\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705888 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26thw\" (UniqueName: \"kubernetes.io/projected/123f03da-c3b1-49dd-aec6-8fd547885851-kube-api-access-26thw\") pod \"123f03da-c3b1-49dd-aec6-8fd547885851\" (UID: \"123f03da-c3b1-49dd-aec6-8fd547885851\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705907 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm5kz\" (UniqueName: 
\"kubernetes.io/projected/ffc2e69b-08e3-4d9d-b21d-755914032707-kube-api-access-lm5kz\") pod \"ffc2e69b-08e3-4d9d-b21d-755914032707\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-trusted-ca\") pod \"8b5c741c-61db-4b35-997b-8edd406b5b01\" (UID: \"8b5c741c-61db-4b35-997b-8edd406b5b01\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705943 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-utilities\") pod \"ffc2e69b-08e3-4d9d-b21d-755914032707\" (UID: \"ffc2e69b-08e3-4d9d-b21d-755914032707\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.705959 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psv2l\" (UniqueName: \"kubernetes.io/projected/eafab696-8d58-4612-97af-abb4fea7dd97-kube-api-access-psv2l\") pod \"eafab696-8d58-4612-97af-abb4fea7dd97\" (UID: \"eafab696-8d58-4612-97af-abb4fea7dd97\") " Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.706135 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.706147 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfs8w\" (UniqueName: \"kubernetes.io/projected/ca71404c-eec8-471e-a7ab-89f4ee69b025-kube-api-access-gfs8w\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.706157 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ca71404c-eec8-471e-a7ab-89f4ee69b025-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.710374 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eafab696-8d58-4612-97af-abb4fea7dd97-kube-api-access-psv2l" (OuterVolumeSpecName: "kube-api-access-psv2l") pod "eafab696-8d58-4612-97af-abb4fea7dd97" (UID: "eafab696-8d58-4612-97af-abb4fea7dd97"). InnerVolumeSpecName "kube-api-access-psv2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.713890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8b5c741c-61db-4b35-997b-8edd406b5b01" (UID: "8b5c741c-61db-4b35-997b-8edd406b5b01"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.714446 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-utilities" (OuterVolumeSpecName: "utilities") pod "123f03da-c3b1-49dd-aec6-8fd547885851" (UID: "123f03da-c3b1-49dd-aec6-8fd547885851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.715583 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5c741c-61db-4b35-997b-8edd406b5b01-kube-api-access-qn9vf" (OuterVolumeSpecName: "kube-api-access-qn9vf") pod "8b5c741c-61db-4b35-997b-8edd406b5b01" (UID: "8b5c741c-61db-4b35-997b-8edd406b5b01"). InnerVolumeSpecName "kube-api-access-qn9vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.715710 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-utilities" (OuterVolumeSpecName: "utilities") pod "eafab696-8d58-4612-97af-abb4fea7dd97" (UID: "eafab696-8d58-4612-97af-abb4fea7dd97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.718077 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc2e69b-08e3-4d9d-b21d-755914032707-kube-api-access-lm5kz" (OuterVolumeSpecName: "kube-api-access-lm5kz") pod "ffc2e69b-08e3-4d9d-b21d-755914032707" (UID: "ffc2e69b-08e3-4d9d-b21d-755914032707"). InnerVolumeSpecName "kube-api-access-lm5kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.719319 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-utilities" (OuterVolumeSpecName: "utilities") pod "ffc2e69b-08e3-4d9d-b21d-755914032707" (UID: "ffc2e69b-08e3-4d9d-b21d-755914032707"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.720195 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8b5c741c-61db-4b35-997b-8edd406b5b01" (UID: "8b5c741c-61db-4b35-997b-8edd406b5b01"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.720379 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.721606 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123f03da-c3b1-49dd-aec6-8fd547885851-kube-api-access-26thw" (OuterVolumeSpecName: "kube-api-access-26thw") pod "123f03da-c3b1-49dd-aec6-8fd547885851" (UID: "123f03da-c3b1-49dd-aec6-8fd547885851"). InnerVolumeSpecName "kube-api-access-26thw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.733676 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.750908 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffc2e69b-08e3-4d9d-b21d-755914032707" (UID: "ffc2e69b-08e3-4d9d-b21d-755914032707"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.757036 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.757653 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.774456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "123f03da-c3b1-49dd-aec6-8fd547885851" (UID: "123f03da-c3b1-49dd-aec6-8fd547885851"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806871 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806907 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806916 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123f03da-c3b1-49dd-aec6-8fd547885851-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806929 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26thw\" (UniqueName: \"kubernetes.io/projected/123f03da-c3b1-49dd-aec6-8fd547885851-kube-api-access-26thw\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc 
kubenswrapper[4886]: I0219 21:04:00.806939 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5c741c-61db-4b35-997b-8edd406b5b01-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806947 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm5kz\" (UniqueName: \"kubernetes.io/projected/ffc2e69b-08e3-4d9d-b21d-755914032707-kube-api-access-lm5kz\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806982 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806991 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psv2l\" (UniqueName: \"kubernetes.io/projected/eafab696-8d58-4612-97af-abb4fea7dd97-kube-api-access-psv2l\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.806999 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.807008 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn9vf\" (UniqueName: \"kubernetes.io/projected/8b5c741c-61db-4b35-997b-8edd406b5b01-kube-api-access-qn9vf\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.807016 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc2e69b-08e3-4d9d-b21d-755914032707-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.807159 4886 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.850402 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.851742 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eafab696-8d58-4612-97af-abb4fea7dd97" (UID: "eafab696-8d58-4612-97af-abb4fea7dd97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.862376 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 21:04:00 crc kubenswrapper[4886]: I0219 21:04:00.908626 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eafab696-8d58-4612-97af-abb4fea7dd97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.150445 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.202286 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.205521 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.301702 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.438666 4886 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.487935 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" event={"ID":"8b5c741c-61db-4b35-997b-8edd406b5b01","Type":"ContainerDied","Data":"9c431e15080ab3017fb7d52960ddb41e68c2380bbd65ec92c2ef9ac5551cb70b"} Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.488005 4886 scope.go:117] "RemoveContainer" containerID="da907c0eadde67b6513c8a7680e3b2a2483adb025cd1b2740ff66a0500bb14b5" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.488149 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqhlp" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.499790 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qh8" event={"ID":"ffc2e69b-08e3-4d9d-b21d-755914032707","Type":"ContainerDied","Data":"9a53e036e29509dd05df65dff7d2b09e3cd5b26a5591e5a8040e50cef3d87c53"} Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.499820 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qh8" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.504646 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xpm6" event={"ID":"123f03da-c3b1-49dd-aec6-8fd547885851","Type":"ContainerDied","Data":"99aeeaaf526436a99f5c88d6b08e79a4ac45f4ad4ccd8d8f078fe827f7cee522"} Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.504731 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xpm6" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.519043 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.520497 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c89wj" event={"ID":"ca71404c-eec8-471e-a7ab-89f4ee69b025","Type":"ContainerDied","Data":"a7194a8b0fa540e78ac178fa1d410599d3a7c8067a0ac0029458526ffd6e3d67"} Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.520582 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c89wj" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.522290 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgcgg" event={"ID":"eafab696-8d58-4612-97af-abb4fea7dd97","Type":"ContainerDied","Data":"9090515f9202b3732383afad462069ec2ed94760cfbdc6cc3071f93801781891"} Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.522383 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgcgg" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.526217 4886 scope.go:117] "RemoveContainer" containerID="c0ed80b21aa1e078ea6a9cc72e46dd9bda17c8ea3b8f629bff65bea64549c574" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.540600 4886 scope.go:117] "RemoveContainer" containerID="84e114b6702a29d37057bb3f083695d32ca7c5c942fcbd3bd8897a9a2aa94371" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.542654 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqhlp"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.553234 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqhlp"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.554791 4886 scope.go:117] "RemoveContainer" containerID="7b92d950939d798b595a45955a1f5b1011abf038889d11b3a34d5af0319424aa" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.564481 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xpm6"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.568872 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xpm6"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.581388 4886 scope.go:117] "RemoveContainer" containerID="4f68b473f6331b2638c3a14acb4ecb616cd7e7a199a990a91605c63c8cc8fb73" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.582404 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qh8"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.583489 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.588047 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-k7qh8"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.591171 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c89wj"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.594919 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c89wj"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.605214 4886 scope.go:117] "RemoveContainer" containerID="d93d5a0f681c56cec1fe2ab84feb6cd600fc5099ed0fd1d8cf1ceb0c904a2e74" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.609590 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgcgg"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.612623 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dgcgg"] Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.642371 4886 scope.go:117] "RemoveContainer" containerID="f5b030c5efaf602ff7db5298cc8a544cdbd095ac6315628fbe17f72df0a713d8" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.648083 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.657163 4886 scope.go:117] "RemoveContainer" containerID="015a717a2f424398fe158e109985eb3c18e0fb2617b8ec7d48e023b069bff693" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.658753 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.702177 4886 scope.go:117] "RemoveContainer" containerID="8007d7f3b6d6ae53e2ce58ef91ab7b715a791c3307f0820549748705ab735649" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.720387 4886 scope.go:117] "RemoveContainer" 
containerID="9ea873c1a04e2611412158e05f7c0b0e49df068ecc21256826fbb51d11c24e8c" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.734368 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.737121 4886 scope.go:117] "RemoveContainer" containerID="8afcdc4f6e4995f4b0875488b207d7499a0ea8e241f8e6c52a01349eb51f93c4" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.750459 4886 scope.go:117] "RemoveContainer" containerID="53c501125ee63bb72a63e31e6f410403463b1da758a0f10fa04bc1be9994cee6" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.762118 4886 scope.go:117] "RemoveContainer" containerID="f85bf342f1a896a91a3c11ef6b2432726ba7b8ca8a5bbf7f65d3b9e0630db02f" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.856461 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.915387 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.925523 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.946178 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 21:04:01 crc kubenswrapper[4886]: I0219 21:04:01.983723 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.052743 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.106200 4886 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.108594 4886 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.236331 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.329925 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.340856 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.378868 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.389454 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.411465 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.440910 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.512353 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.514131 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.598195 4886 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.607442 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.610489 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" path="/var/lib/kubelet/pods/123f03da-c3b1-49dd-aec6-8fd547885851/volumes" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.612165 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" path="/var/lib/kubelet/pods/8b5c741c-61db-4b35-997b-8edd406b5b01/volumes" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.613111 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" path="/var/lib/kubelet/pods/ca71404c-eec8-471e-a7ab-89f4ee69b025/volumes" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.615417 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" path="/var/lib/kubelet/pods/eafab696-8d58-4612-97af-abb4fea7dd97/volumes" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.617006 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" path="/var/lib/kubelet/pods/ffc2e69b-08e3-4d9d-b21d-755914032707/volumes" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.623071 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.635726 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.676272 4886 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.775949 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.784488 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.824284 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.929122 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 21:04:02 crc kubenswrapper[4886]: I0219 21:04:02.933670 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.045045 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.047987 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.050594 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.077413 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.079707 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.090501 
4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.201702 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.217698 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.237833 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.263785 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.325559 4886 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.371671 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.541814 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.569417 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.644507 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.702960 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 
21:04:03.716980 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.734288 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.778072 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.797030 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.807526 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.822554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 21:04:03 crc kubenswrapper[4886]: I0219 21:04:03.840522 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.026638 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.032551 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.066284 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.133517 4886 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.342348 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369125 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nnq6"] Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369318 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369330 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369340 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" containerName="installer" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369346 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" containerName="installer" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369355 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369362 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369369 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369375 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="registry-server" Feb 19 
21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369384 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369389 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369397 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369403 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369409 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369414 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369422 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369432 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369445 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369453 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="extract-utilities" Feb 19 
21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369466 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369474 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="extract-content" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369488 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369495 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369502 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369507 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="extract-utilities" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369515 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369520 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: E0219 21:04:04.369528 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerName="marketplace-operator" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369534 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerName="marketplace-operator" Feb 19 
21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369613 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c741c-61db-4b35-997b-8edd406b5b01" containerName="marketplace-operator" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369624 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3afcbe4-42fa-4910-9365-a0b6891fa49b" containerName="installer" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369631 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc2e69b-08e3-4d9d-b21d-755914032707" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369641 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca71404c-eec8-471e-a7ab-89f4ee69b025" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369648 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eafab696-8d58-4612-97af-abb4fea7dd97" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.369654 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="123f03da-c3b1-49dd-aec6-8fd547885851" containerName="registry-server" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.370005 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.372414 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.374233 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.374366 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.374429 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.380464 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.388602 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nnq6"] Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.423064 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.457751 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.463687 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.478439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/4f34ced6-828e-4337-8aad-b2ce35c35793-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.478502 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxkz\" (UniqueName: \"kubernetes.io/projected/4f34ced6-828e-4337-8aad-b2ce35c35793-kube-api-access-5rxkz\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.478663 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f34ced6-828e-4337-8aad-b2ce35c35793-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.492903 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.494307 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.541488 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.580596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/4f34ced6-828e-4337-8aad-b2ce35c35793-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.580659 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxkz\" (UniqueName: \"kubernetes.io/projected/4f34ced6-828e-4337-8aad-b2ce35c35793-kube-api-access-5rxkz\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.580689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f34ced6-828e-4337-8aad-b2ce35c35793-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.581981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f34ced6-828e-4337-8aad-b2ce35c35793-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.590318 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4f34ced6-828e-4337-8aad-b2ce35c35793-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc 
kubenswrapper[4886]: I0219 21:04:04.610828 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxkz\" (UniqueName: \"kubernetes.io/projected/4f34ced6-828e-4337-8aad-b2ce35c35793-kube-api-access-5rxkz\") pod \"marketplace-operator-79b997595-8nnq6\" (UID: \"4f34ced6-828e-4337-8aad-b2ce35c35793\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.665364 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.680099 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.730616 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.761671 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.788692 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.906753 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 21:04:04 crc kubenswrapper[4886]: I0219 21:04:04.995450 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.021131 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 
19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.062696 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.071744 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.120849 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.145674 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.169644 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nnq6"] Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.214146 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.366362 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.372752 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.383740 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.475831 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.550488 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" event={"ID":"4f34ced6-828e-4337-8aad-b2ce35c35793","Type":"ContainerStarted","Data":"e6247a6af0276579972c5a9f18e24830d697df0f1fa6e16713dff09885abcc7a"} Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.550530 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" event={"ID":"4f34ced6-828e-4337-8aad-b2ce35c35793","Type":"ContainerStarted","Data":"a15b7b8d0156f3498ff4ac45364b4cd8b6fcde183a04720e6a238b54b52eda95"} Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.550722 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.552149 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nnq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.552192 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" podUID="4f34ced6-828e-4337-8aad-b2ce35c35793" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.569731 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" podStartSLOduration=5.569716863 podStartE2EDuration="5.569716863s" podCreationTimestamp="2026-02-19 21:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:04:05.568124143 +0000 UTC 
m=+276.195967193" watchObservedRunningTime="2026-02-19 21:04:05.569716863 +0000 UTC m=+276.197559913" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.694956 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.723869 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.760087 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.761626 4886 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.761828 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6" gracePeriod=5 Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.762488 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.791138 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.854247 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.916907 4886 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 
21:04:05 crc kubenswrapper[4886]: I0219 21:04:05.920209 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.080687 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.099804 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.169188 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.264018 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.411962 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.559323 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.614422 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.631774 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.683329 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 
21:04:06.721593 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.740911 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.790920 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.791405 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.891241 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 21:04:06 crc kubenswrapper[4886]: I0219 21:04:06.968964 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.014434 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.023325 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.041439 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.160473 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.202808 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.235555 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.238115 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.306912 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.309677 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.417724 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.450504 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.618223 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.738973 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.800814 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 21:04:07 crc kubenswrapper[4886]: I0219 21:04:07.828428 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 21:04:08 crc 
kubenswrapper[4886]: I0219 21:04:08.129639 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 21:04:08 crc kubenswrapper[4886]: I0219 21:04:08.431524 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 21:04:08 crc kubenswrapper[4886]: I0219 21:04:08.438992 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 21:04:08 crc kubenswrapper[4886]: I0219 21:04:08.648667 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 21:04:08 crc kubenswrapper[4886]: I0219 21:04:08.777955 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 21:04:08 crc kubenswrapper[4886]: I0219 21:04:08.838769 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 21:04:10 crc kubenswrapper[4886]: I0219 21:04:10.934951 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.325144 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.325233 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473603 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473652 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473685 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473723 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473750 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473770 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473790 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473830 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473920 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473963 4886 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473973 4886 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.473983 4886 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.482596 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.575306 4886 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.575596 4886 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.583300 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.583364 4886 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6" exitCode=137 Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.583418 4886 scope.go:117] "RemoveContainer" containerID="b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.583467 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.602321 4886 scope.go:117] "RemoveContainer" containerID="b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6" Feb 19 21:04:11 crc kubenswrapper[4886]: E0219 21:04:11.602703 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6\": container with ID starting with b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6 not found: ID does not exist" containerID="b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6" Feb 19 21:04:11 crc kubenswrapper[4886]: I0219 21:04:11.602764 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6"} err="failed to get container status \"b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6\": rpc error: code = NotFound desc = could not find container \"b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6\": container with ID starting with b97c8cf3921060b4929e3266bba171b48f0dc69f36c09b73b6fa642aed0a07e6 not found: ID does not exist" Feb 19 21:04:12 crc kubenswrapper[4886]: I0219 21:04:12.608362 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.000728 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-q85bd"] Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.001926 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" 
podUID="14f032d2-80a3-4c8a-a5c8-b82764dc2f18" containerName="controller-manager" containerID="cri-o://fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d" gracePeriod=30 Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.096443 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"] Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.096749 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" podUID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" containerName="route-controller-manager" containerID="cri-o://3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866" gracePeriod=30 Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.372508 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.461430 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.476384 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk4q2\" (UniqueName: \"kubernetes.io/projected/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-kube-api-access-kk4q2\") pod \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.476433 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-config\") pod \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.476469 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-serving-cert\") pod \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.476509 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-client-ca\") pod \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.476535 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-proxy-ca-bundles\") pod \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\" (UID: \"14f032d2-80a3-4c8a-a5c8-b82764dc2f18\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.477567 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14f032d2-80a3-4c8a-a5c8-b82764dc2f18" (UID: "14f032d2-80a3-4c8a-a5c8-b82764dc2f18"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.477713 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-client-ca" (OuterVolumeSpecName: "client-ca") pod "14f032d2-80a3-4c8a-a5c8-b82764dc2f18" (UID: "14f032d2-80a3-4c8a-a5c8-b82764dc2f18"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.484026 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14f032d2-80a3-4c8a-a5c8-b82764dc2f18" (UID: "14f032d2-80a3-4c8a-a5c8-b82764dc2f18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.484131 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-kube-api-access-kk4q2" (OuterVolumeSpecName: "kube-api-access-kk4q2") pod "14f032d2-80a3-4c8a-a5c8-b82764dc2f18" (UID: "14f032d2-80a3-4c8a-a5c8-b82764dc2f18"). InnerVolumeSpecName "kube-api-access-kk4q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.484826 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-config" (OuterVolumeSpecName: "config") pod "14f032d2-80a3-4c8a-a5c8-b82764dc2f18" (UID: "14f032d2-80a3-4c8a-a5c8-b82764dc2f18"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.577724 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx74p\" (UniqueName: \"kubernetes.io/projected/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-kube-api-access-tx74p\") pod \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.577830 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-config\") pod \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.577882 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-client-ca\") pod \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.577919 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-serving-cert\") pod \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\" (UID: \"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723\") " Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.578244 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.578290 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc 
kubenswrapper[4886]: I0219 21:04:28.578308 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.578326 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.578343 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk4q2\" (UniqueName: \"kubernetes.io/projected/14f032d2-80a3-4c8a-a5c8-b82764dc2f18-kube-api-access-kk4q2\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.579962 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" (UID: "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.580179 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-config" (OuterVolumeSpecName: "config") pod "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" (UID: "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.583546 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" (UID: "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.583832 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-kube-api-access-tx74p" (OuterVolumeSpecName: "kube-api-access-tx74p") pod "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" (UID: "1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723"). InnerVolumeSpecName "kube-api-access-tx74p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.680551 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx74p\" (UniqueName: \"kubernetes.io/projected/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-kube-api-access-tx74p\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.680606 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.680623 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.680640 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.695847 4886 generic.go:334] "Generic (PLEG): container finished" podID="14f032d2-80a3-4c8a-a5c8-b82764dc2f18" containerID="fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d" exitCode=0 Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.695903 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" event={"ID":"14f032d2-80a3-4c8a-a5c8-b82764dc2f18","Type":"ContainerDied","Data":"fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d"} Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.695954 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.696416 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-q85bd" event={"ID":"14f032d2-80a3-4c8a-a5c8-b82764dc2f18","Type":"ContainerDied","Data":"3c1de9a087077a196ad09c487a0e820cde092a7d350945867a8d1e04ffb974ac"} Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.696428 4886 scope.go:117] "RemoveContainer" containerID="fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.698080 4886 generic.go:334] "Generic (PLEG): container finished" podID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" containerID="3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866" exitCode=0 Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.698285 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" event={"ID":"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723","Type":"ContainerDied","Data":"3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866"} Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.698385 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" event={"ID":"1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723","Type":"ContainerDied","Data":"5df2591e59bb5c0e66a6e1b8970f9655cb31518d9e16fa47e7749f7b47263f5b"} Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.698579 4886 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.718440 4886 scope.go:117] "RemoveContainer" containerID="fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d" Feb 19 21:04:28 crc kubenswrapper[4886]: E0219 21:04:28.720930 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d\": container with ID starting with fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d not found: ID does not exist" containerID="fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.720981 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d"} err="failed to get container status \"fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d\": rpc error: code = NotFound desc = could not find container \"fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d\": container with ID starting with fff01ee8edebbbf0f57a3bbad6d323173505289f196876a2e22e0953e6f4555d not found: ID does not exist" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.721018 4886 scope.go:117] "RemoveContainer" containerID="3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.725432 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"] Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.728440 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lk9fd"] Feb 19 21:04:28 crc kubenswrapper[4886]: 
I0219 21:04:28.737327 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-q85bd"] Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.739001 4886 scope.go:117] "RemoveContainer" containerID="3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866" Feb 19 21:04:28 crc kubenswrapper[4886]: E0219 21:04:28.739404 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866\": container with ID starting with 3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866 not found: ID does not exist" containerID="3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.739690 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866"} err="failed to get container status \"3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866\": rpc error: code = NotFound desc = could not find container \"3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866\": container with ID starting with 3351990ec0250b60867c0176f6e8860ad5ac8a74447a35022d9923cd1147f866 not found: ID does not exist" Feb 19 21:04:28 crc kubenswrapper[4886]: I0219 21:04:28.743768 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-q85bd"] Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.593845 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b"] Feb 19 21:04:29 crc kubenswrapper[4886]: E0219 21:04:29.594149 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" containerName="route-controller-manager" Feb 19 21:04:29 crc 
kubenswrapper[4886]: I0219 21:04:29.594165 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" containerName="route-controller-manager" Feb 19 21:04:29 crc kubenswrapper[4886]: E0219 21:04:29.594184 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f032d2-80a3-4c8a-a5c8-b82764dc2f18" containerName="controller-manager" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.594192 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f032d2-80a3-4c8a-a5c8-b82764dc2f18" containerName="controller-manager" Feb 19 21:04:29 crc kubenswrapper[4886]: E0219 21:04:29.594206 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.594214 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.594350 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f032d2-80a3-4c8a-a5c8-b82764dc2f18" containerName="controller-manager" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.594369 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" containerName="route-controller-manager" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.594378 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.595000 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.598636 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.598934 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.598644 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.599580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.601451 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.601866 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.605150 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt"] Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.606386 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.607179 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.610346 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.610895 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.610968 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.611331 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.611496 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.611909 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.617422 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b"] Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.623382 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt"] Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692321 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s2vgl\" (UniqueName: \"kubernetes.io/projected/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-kube-api-access-s2vgl\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692426 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-client-ca\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692458 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-client-ca\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-config\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649ad9ec-84e7-45a7-ad08-586423dfe617-serving-cert\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " 
pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6jg\" (UniqueName: \"kubernetes.io/projected/649ad9ec-84e7-45a7-ad08-586423dfe617-kube-api-access-ql6jg\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-config\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692654 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-proxy-ca-bundles\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.692695 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-serving-cert\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.793882 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-proxy-ca-bundles\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.793987 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-serving-cert\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vgl\" (UniqueName: \"kubernetes.io/projected/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-kube-api-access-s2vgl\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-client-ca\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794128 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-client-ca\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " 
pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-config\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794213 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649ad9ec-84e7-45a7-ad08-586423dfe617-serving-cert\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794300 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6jg\" (UniqueName: \"kubernetes.io/projected/649ad9ec-84e7-45a7-ad08-586423dfe617-kube-api-access-ql6jg\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.794348 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-config\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.795666 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-client-ca\") pod 
\"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.796635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-config\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.797235 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-proxy-ca-bundles\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.797487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-config\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.798432 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-client-ca\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.810369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/649ad9ec-84e7-45a7-ad08-586423dfe617-serving-cert\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.812339 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-serving-cert\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.814324 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6jg\" (UniqueName: \"kubernetes.io/projected/649ad9ec-84e7-45a7-ad08-586423dfe617-kube-api-access-ql6jg\") pod \"controller-manager-6c6d64b67b-wpn4b\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.821530 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2vgl\" (UniqueName: \"kubernetes.io/projected/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-kube-api-access-s2vgl\") pod \"route-controller-manager-b5ffd9947-6dxnt\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.919364 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.929118 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:29 crc kubenswrapper[4886]: I0219 21:04:29.995507 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b"] Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.010310 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt"] Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.242453 4886 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.414903 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt"] Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.470808 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b"] Feb 19 21:04:30 crc kubenswrapper[4886]: W0219 21:04:30.478345 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod649ad9ec_84e7_45a7_ad08_586423dfe617.slice/crio-01b13dda4df024188105c906403e1b72c3dac0861718914f0dc40d584e9ddbbd WatchSource:0}: Error finding container 01b13dda4df024188105c906403e1b72c3dac0861718914f0dc40d584e9ddbbd: Status 404 returned error can't find the container with id 01b13dda4df024188105c906403e1b72c3dac0861718914f0dc40d584e9ddbbd Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.621296 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f032d2-80a3-4c8a-a5c8-b82764dc2f18" path="/var/lib/kubelet/pods/14f032d2-80a3-4c8a-a5c8-b82764dc2f18/volumes" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.622940 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723" path="/var/lib/kubelet/pods/1f2075fe-7545-4e4d-bdbf-b9d1fd1d0723/volumes" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.713832 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" event={"ID":"649ad9ec-84e7-45a7-ad08-586423dfe617","Type":"ContainerStarted","Data":"b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e"} Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.715360 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.715373 4886 patch_prober.go:28] interesting pod/controller-manager-6c6d64b67b-wpn4b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.715433 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" podUID="649ad9ec-84e7-45a7-ad08-586423dfe617" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.713916 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" podUID="649ad9ec-84e7-45a7-ad08-586423dfe617" containerName="controller-manager" containerID="cri-o://b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e" gracePeriod=30 Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.715383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" event={"ID":"649ad9ec-84e7-45a7-ad08-586423dfe617","Type":"ContainerStarted","Data":"01b13dda4df024188105c906403e1b72c3dac0861718914f0dc40d584e9ddbbd"} Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.717640 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" event={"ID":"aeb33818-5a1e-44b7-9c33-be6fbdb919c6","Type":"ContainerStarted","Data":"5b11d9c5347739b334168f7023cf76637507299c06fe93a62d4cc60fb6778469"} Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.717687 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" event={"ID":"aeb33818-5a1e-44b7-9c33-be6fbdb919c6","Type":"ContainerStarted","Data":"2e38c18e4e99b3e920812190f0f0aa998d9757593e0da72ccfc7a448535e31f3"} Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.717814 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" podUID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" containerName="route-controller-manager" containerID="cri-o://5b11d9c5347739b334168f7023cf76637507299c06fe93a62d4cc60fb6778469" gracePeriod=30 Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.717931 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.733344 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" podStartSLOduration=2.733314107 podStartE2EDuration="2.733314107s" podCreationTimestamp="2026-02-19 21:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 21:04:30.73184876 +0000 UTC m=+301.359691820" watchObservedRunningTime="2026-02-19 21:04:30.733314107 +0000 UTC m=+301.361157197" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.751672 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" podStartSLOduration=2.751652739 podStartE2EDuration="2.751652739s" podCreationTimestamp="2026-02-19 21:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:04:30.749650408 +0000 UTC m=+301.377493478" watchObservedRunningTime="2026-02-19 21:04:30.751652739 +0000 UTC m=+301.379495799" Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.977116 4886 patch_prober.go:28] interesting pod/route-controller-manager-b5ffd9947-6dxnt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:55588->10.217.0.60:8443: read: connection reset by peer" start-of-body= Feb 19 21:04:30 crc kubenswrapper[4886]: I0219 21:04:30.977176 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" podUID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:55588->10.217.0.60:8443: read: connection reset by peer" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.133195 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6c6d64b67b-wpn4b_649ad9ec-84e7-45a7-ad08-586423dfe617/controller-manager/0.log" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.133256 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.161225 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-767d6fb7c7-cznrl"] Feb 19 21:04:31 crc kubenswrapper[4886]: E0219 21:04:31.161470 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649ad9ec-84e7-45a7-ad08-586423dfe617" containerName="controller-manager" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.161484 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="649ad9ec-84e7-45a7-ad08-586423dfe617" containerName="controller-manager" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.161582 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="649ad9ec-84e7-45a7-ad08-586423dfe617" containerName="controller-manager" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.161941 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.178461 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767d6fb7c7-cznrl"] Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.213545 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649ad9ec-84e7-45a7-ad08-586423dfe617-serving-cert\") pod \"649ad9ec-84e7-45a7-ad08-586423dfe617\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.213869 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-proxy-ca-bundles\") pod \"649ad9ec-84e7-45a7-ad08-586423dfe617\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.214187 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-client-ca\") pod \"649ad9ec-84e7-45a7-ad08-586423dfe617\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.214343 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql6jg\" (UniqueName: \"kubernetes.io/projected/649ad9ec-84e7-45a7-ad08-586423dfe617-kube-api-access-ql6jg\") pod \"649ad9ec-84e7-45a7-ad08-586423dfe617\" (UID: \"649ad9ec-84e7-45a7-ad08-586423dfe617\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.214478 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-config\") pod \"649ad9ec-84e7-45a7-ad08-586423dfe617\" (UID: 
\"649ad9ec-84e7-45a7-ad08-586423dfe617\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.214882 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-client-ca" (OuterVolumeSpecName: "client-ca") pod "649ad9ec-84e7-45a7-ad08-586423dfe617" (UID: "649ad9ec-84e7-45a7-ad08-586423dfe617"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.214943 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "649ad9ec-84e7-45a7-ad08-586423dfe617" (UID: "649ad9ec-84e7-45a7-ad08-586423dfe617"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.215045 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-config" (OuterVolumeSpecName: "config") pod "649ad9ec-84e7-45a7-ad08-586423dfe617" (UID: "649ad9ec-84e7-45a7-ad08-586423dfe617"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.218886 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/649ad9ec-84e7-45a7-ad08-586423dfe617-kube-api-access-ql6jg" (OuterVolumeSpecName: "kube-api-access-ql6jg") pod "649ad9ec-84e7-45a7-ad08-586423dfe617" (UID: "649ad9ec-84e7-45a7-ad08-586423dfe617"). InnerVolumeSpecName "kube-api-access-ql6jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.218970 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649ad9ec-84e7-45a7-ad08-586423dfe617-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "649ad9ec-84e7-45a7-ad08-586423dfe617" (UID: "649ad9ec-84e7-45a7-ad08-586423dfe617"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtf5\" (UniqueName: \"kubernetes.io/projected/ebaebd34-645c-4d4e-8554-604dc8b1beac-kube-api-access-nqtf5\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaebd34-645c-4d4e-8554-604dc8b1beac-serving-cert\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316706 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-proxy-ca-bundles\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-client-ca\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-config\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316851 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql6jg\" (UniqueName: \"kubernetes.io/projected/649ad9ec-84e7-45a7-ad08-586423dfe617-kube-api-access-ql6jg\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316871 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316881 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649ad9ec-84e7-45a7-ad08-586423dfe617-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316890 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.316900 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/649ad9ec-84e7-45a7-ad08-586423dfe617-client-ca\") on node \"crc\" DevicePath 
\"\"" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.417732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaebd34-645c-4d4e-8554-604dc8b1beac-serving-cert\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.417789 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-proxy-ca-bundles\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.417819 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-client-ca\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.417845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-config\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.417901 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtf5\" (UniqueName: \"kubernetes.io/projected/ebaebd34-645c-4d4e-8554-604dc8b1beac-kube-api-access-nqtf5\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: 
\"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.419023 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-client-ca\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.419783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-proxy-ca-bundles\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.420059 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-config\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.424101 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaebd34-645c-4d4e-8554-604dc8b1beac-serving-cert\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.444374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtf5\" (UniqueName: 
\"kubernetes.io/projected/ebaebd34-645c-4d4e-8554-604dc8b1beac-kube-api-access-nqtf5\") pod \"controller-manager-767d6fb7c7-cznrl\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.477774 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.670248 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767d6fb7c7-cznrl"] Feb 19 21:04:31 crc kubenswrapper[4886]: W0219 21:04:31.686244 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebaebd34_645c_4d4e_8554_604dc8b1beac.slice/crio-60c80c173a732140e0940bbf8ce76964d7367fa3a61b72fabd73cd33fdce81f7 WatchSource:0}: Error finding container 60c80c173a732140e0940bbf8ce76964d7367fa3a61b72fabd73cd33fdce81f7: Status 404 returned error can't find the container with id 60c80c173a732140e0940bbf8ce76964d7367fa3a61b72fabd73cd33fdce81f7 Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.725973 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-b5ffd9947-6dxnt_aeb33818-5a1e-44b7-9c33-be6fbdb919c6/route-controller-manager/0.log" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.726028 4886 generic.go:334] "Generic (PLEG): container finished" podID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" containerID="5b11d9c5347739b334168f7023cf76637507299c06fe93a62d4cc60fb6778469" exitCode=255 Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.726092 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" 
event={"ID":"aeb33818-5a1e-44b7-9c33-be6fbdb919c6","Type":"ContainerDied","Data":"5b11d9c5347739b334168f7023cf76637507299c06fe93a62d4cc60fb6778469"} Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.727714 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6c6d64b67b-wpn4b_649ad9ec-84e7-45a7-ad08-586423dfe617/controller-manager/0.log" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.727780 4886 generic.go:334] "Generic (PLEG): container finished" podID="649ad9ec-84e7-45a7-ad08-586423dfe617" containerID="b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e" exitCode=2 Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.727854 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.727853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" event={"ID":"649ad9ec-84e7-45a7-ad08-586423dfe617","Type":"ContainerDied","Data":"b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e"} Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.727994 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b" event={"ID":"649ad9ec-84e7-45a7-ad08-586423dfe617","Type":"ContainerDied","Data":"01b13dda4df024188105c906403e1b72c3dac0861718914f0dc40d584e9ddbbd"} Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.728022 4886 scope.go:117] "RemoveContainer" containerID="b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.729728 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" 
event={"ID":"ebaebd34-645c-4d4e-8554-604dc8b1beac","Type":"ContainerStarted","Data":"60c80c173a732140e0940bbf8ce76964d7367fa3a61b72fabd73cd33fdce81f7"} Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.758623 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b"] Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.764119 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c6d64b67b-wpn4b"] Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.769949 4886 scope.go:117] "RemoveContainer" containerID="b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e" Feb 19 21:04:31 crc kubenswrapper[4886]: E0219 21:04:31.771972 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e\": container with ID starting with b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e not found: ID does not exist" containerID="b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.772021 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e"} err="failed to get container status \"b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e\": rpc error: code = NotFound desc = could not find container \"b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e\": container with ID starting with b356d8c7321473cdbaa562c227fbdfa7f319cd654ee6e8f75347e2fdc4e66e8e not found: ID does not exist" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.773336 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-b5ffd9947-6dxnt_aeb33818-5a1e-44b7-9c33-be6fbdb919c6/route-controller-manager/0.log" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.773406 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.892095 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k"] Feb 19 21:04:31 crc kubenswrapper[4886]: E0219 21:04:31.892332 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" containerName="route-controller-manager" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.892346 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" containerName="route-controller-manager" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.892461 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" containerName="route-controller-manager" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.892825 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.897251 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.897845 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.898420 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.900540 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.909579 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.915243 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k"] Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.923182 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-client-ca\") pod \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.923325 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-serving-cert\") pod \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.923415 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-config\") pod \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.923449 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2vgl\" (UniqueName: \"kubernetes.io/projected/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-kube-api-access-s2vgl\") pod \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\" (UID: \"aeb33818-5a1e-44b7-9c33-be6fbdb919c6\") " Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.924365 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "aeb33818-5a1e-44b7-9c33-be6fbdb919c6" (UID: "aeb33818-5a1e-44b7-9c33-be6fbdb919c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.924473 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-config" (OuterVolumeSpecName: "config") pod "aeb33818-5a1e-44b7-9c33-be6fbdb919c6" (UID: "aeb33818-5a1e-44b7-9c33-be6fbdb919c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.957020 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aeb33818-5a1e-44b7-9c33-be6fbdb919c6" (UID: "aeb33818-5a1e-44b7-9c33-be6fbdb919c6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:04:31 crc kubenswrapper[4886]: I0219 21:04:31.964949 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-kube-api-access-s2vgl" (OuterVolumeSpecName: "kube-api-access-s2vgl") pod "aeb33818-5a1e-44b7-9c33-be6fbdb919c6" (UID: "aeb33818-5a1e-44b7-9c33-be6fbdb919c6"). InnerVolumeSpecName "kube-api-access-s2vgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024526 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grwd\" (UniqueName: \"kubernetes.io/projected/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-kube-api-access-6grwd\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024625 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024658 4886 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024669 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024678 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.024687 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2vgl\" (UniqueName: \"kubernetes.io/projected/aeb33818-5a1e-44b7-9c33-be6fbdb919c6-kube-api-access-s2vgl\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.125422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.125467 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.125491 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6grwd\" (UniqueName: \"kubernetes.io/projected/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-kube-api-access-6grwd\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.126945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.129324 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.151012 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6grwd\" (UniqueName: \"kubernetes.io/projected/4af2c594-10f2-4adc-ad72-a1f48a0a4f91-kube-api-access-6grwd\") pod \"cluster-monitoring-operator-6d5b84845-pmg2k\" (UID: \"4af2c594-10f2-4adc-ad72-a1f48a0a4f91\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.210089 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.410328 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k"] Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.618700 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="649ad9ec-84e7-45a7-ad08-586423dfe617" path="/var/lib/kubelet/pods/649ad9ec-84e7-45a7-ad08-586423dfe617/volumes" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.736698 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-b5ffd9947-6dxnt_aeb33818-5a1e-44b7-9c33-be6fbdb919c6/route-controller-manager/0.log" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.736765 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" event={"ID":"aeb33818-5a1e-44b7-9c33-be6fbdb919c6","Type":"ContainerDied","Data":"2e38c18e4e99b3e920812190f0f0aa998d9757593e0da72ccfc7a448535e31f3"} Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.736799 4886 scope.go:117] "RemoveContainer" containerID="5b11d9c5347739b334168f7023cf76637507299c06fe93a62d4cc60fb6778469" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.736915 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.741483 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" event={"ID":"ebaebd34-645c-4d4e-8554-604dc8b1beac","Type":"ContainerStarted","Data":"a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98"} Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.742084 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.746429 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" event={"ID":"4af2c594-10f2-4adc-ad72-a1f48a0a4f91","Type":"ContainerStarted","Data":"63328a15080a2713bfc574f05d17032acf4da59801e4308ea10557dd243ad807"} Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.754134 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.757253 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt"] Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.763819 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b5ffd9947-6dxnt"] Feb 19 21:04:32 crc kubenswrapper[4886]: I0219 21:04:32.775235 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" podStartSLOduration=2.775048031 podStartE2EDuration="2.775048031s" podCreationTimestamp="2026-02-19 21:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:04:32.773217454 +0000 UTC m=+303.401060524" watchObservedRunningTime="2026-02-19 21:04:32.775048031 +0000 UTC m=+303.402891131" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.590022 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf"] Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.591034 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.593496 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.594162 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.595358 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf"] Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.596508 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.596859 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.597028 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.597286 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 21:04:33 crc 
kubenswrapper[4886]: I0219 21:04:33.746377 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e90185c-2cd5-48d5-9e61-43020c0e21de-serving-cert\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.746558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqps9\" (UniqueName: \"kubernetes.io/projected/2e90185c-2cd5-48d5-9e61-43020c0e21de-kube-api-access-hqps9\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.746674 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e90185c-2cd5-48d5-9e61-43020c0e21de-client-ca\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.748475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e90185c-2cd5-48d5-9e61-43020c0e21de-config\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.849550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqps9\" (UniqueName: 
\"kubernetes.io/projected/2e90185c-2cd5-48d5-9e61-43020c0e21de-kube-api-access-hqps9\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.849601 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e90185c-2cd5-48d5-9e61-43020c0e21de-client-ca\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.849661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e90185c-2cd5-48d5-9e61-43020c0e21de-config\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.849689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e90185c-2cd5-48d5-9e61-43020c0e21de-serving-cert\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.850748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e90185c-2cd5-48d5-9e61-43020c0e21de-client-ca\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc 
kubenswrapper[4886]: I0219 21:04:33.851477 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e90185c-2cd5-48d5-9e61-43020c0e21de-config\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.858003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e90185c-2cd5-48d5-9e61-43020c0e21de-serving-cert\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.869585 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqps9\" (UniqueName: \"kubernetes.io/projected/2e90185c-2cd5-48d5-9e61-43020c0e21de-kube-api-access-hqps9\") pod \"route-controller-manager-7c7c79bc7d-qmtdf\" (UID: \"2e90185c-2cd5-48d5-9e61-43020c0e21de\") " pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:33 crc kubenswrapper[4886]: I0219 21:04:33.919969 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.538585 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf"] Feb 19 21:04:34 crc kubenswrapper[4886]: W0219 21:04:34.550201 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e90185c_2cd5_48d5_9e61_43020c0e21de.slice/crio-c65ca66f53471173c89b44dbd0d59d33ebbf8818be997e45803baac6fa4afaa4 WatchSource:0}: Error finding container c65ca66f53471173c89b44dbd0d59d33ebbf8818be997e45803baac6fa4afaa4: Status 404 returned error can't find the container with id c65ca66f53471173c89b44dbd0d59d33ebbf8818be997e45803baac6fa4afaa4 Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.610961 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb33818-5a1e-44b7-9c33-be6fbdb919c6" path="/var/lib/kubelet/pods/aeb33818-5a1e-44b7-9c33-be6fbdb919c6/volumes" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.772999 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" event={"ID":"2e90185c-2cd5-48d5-9e61-43020c0e21de","Type":"ContainerStarted","Data":"dd2646a7450d3125e062b69ad24dace478d7444cb86f6346d3180a1f34417c62"} Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.773055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" event={"ID":"2e90185c-2cd5-48d5-9e61-43020c0e21de","Type":"ContainerStarted","Data":"c65ca66f53471173c89b44dbd0d59d33ebbf8818be997e45803baac6fa4afaa4"} Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.780321 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" 
event={"ID":"4af2c594-10f2-4adc-ad72-a1f48a0a4f91","Type":"ContainerStarted","Data":"75bf2c8bd9dd098b3efd0885f9a15d0311333de91e786d570bdf2652a5c15883"} Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.803608 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podStartSLOduration=4.803590993 podStartE2EDuration="4.803590993s" podCreationTimestamp="2026-02-19 21:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:04:34.796549406 +0000 UTC m=+305.424392466" watchObservedRunningTime="2026-02-19 21:04:34.803590993 +0000 UTC m=+305.431434043" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.813115 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pmg2k" podStartSLOduration=1.896180732 podStartE2EDuration="3.813100323s" podCreationTimestamp="2026-02-19 21:04:31 +0000 UTC" firstStartedPulling="2026-02-19 21:04:32.413970756 +0000 UTC m=+303.041813806" lastFinishedPulling="2026-02-19 21:04:34.330890347 +0000 UTC m=+304.958733397" observedRunningTime="2026-02-19 21:04:34.81259771 +0000 UTC m=+305.440440760" watchObservedRunningTime="2026-02-19 21:04:34.813100323 +0000 UTC m=+305.440943373" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.963279 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"] Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.963844 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.965489 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.970072 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-mss88" Feb 19 21:04:34 crc kubenswrapper[4886]: I0219 21:04:34.977521 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"] Feb 19 21:04:35 crc kubenswrapper[4886]: I0219 21:04:35.069491 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:35 crc kubenswrapper[4886]: I0219 21:04:35.171053 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:35 crc kubenswrapper[4886]: E0219 21:04:35.171235 4886 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:35 crc kubenswrapper[4886]: E0219 21:04:35.171304 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates podName:adc7f4d1-4f6b-4a8a-843e-119a248a1e17 nodeName:}" failed. No retries permitted until 2026-02-19 21:04:35.671286684 +0000 UTC m=+306.299129724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-txj4p" (UID: "adc7f4d1-4f6b-4a8a-843e-119a248a1e17") : secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:35 crc kubenswrapper[4886]: I0219 21:04:35.677720 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:35 crc kubenswrapper[4886]: E0219 21:04:35.677938 4886 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:35 crc kubenswrapper[4886]: E0219 21:04:35.678215 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates podName:adc7f4d1-4f6b-4a8a-843e-119a248a1e17 nodeName:}" failed. No retries permitted until 2026-02-19 21:04:36.678195581 +0000 UTC m=+307.306038641 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-txj4p" (UID: "adc7f4d1-4f6b-4a8a-843e-119a248a1e17") : secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:35 crc kubenswrapper[4886]: I0219 21:04:35.783775 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:35 crc kubenswrapper[4886]: I0219 21:04:35.791935 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 21:04:36 crc kubenswrapper[4886]: I0219 21:04:36.692155 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:36 crc kubenswrapper[4886]: E0219 21:04:36.692462 4886 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:36 crc kubenswrapper[4886]: E0219 21:04:36.692610 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates podName:adc7f4d1-4f6b-4a8a-843e-119a248a1e17 nodeName:}" failed. No retries permitted until 2026-02-19 21:04:38.692574129 +0000 UTC m=+309.320417219 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-txj4p" (UID: "adc7f4d1-4f6b-4a8a-843e-119a248a1e17") : secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:38 crc kubenswrapper[4886]: I0219 21:04:38.718719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:38 crc kubenswrapper[4886]: E0219 21:04:38.718856 4886 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:38 crc kubenswrapper[4886]: E0219 21:04:38.719233 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates podName:adc7f4d1-4f6b-4a8a-843e-119a248a1e17 nodeName:}" failed. No retries permitted until 2026-02-19 21:04:42.719213534 +0000 UTC m=+313.347056594 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-txj4p" (UID: "adc7f4d1-4f6b-4a8a-843e-119a248a1e17") : secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:42 crc kubenswrapper[4886]: I0219 21:04:42.806384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 21:04:42 crc kubenswrapper[4886]: E0219 21:04:42.806691 4886 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:42 crc kubenswrapper[4886]: E0219 21:04:42.806815 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates podName:adc7f4d1-4f6b-4a8a-843e-119a248a1e17 nodeName:}" failed. No retries permitted until 2026-02-19 21:04:50.806784645 +0000 UTC m=+321.434627725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-txj4p" (UID: "adc7f4d1-4f6b-4a8a-843e-119a248a1e17") : secret "prometheus-operator-admission-webhook-tls" not found Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.799314 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrdmp"] Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.800792 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.803492 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.822645 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrdmp"] Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.948816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-catalog-content\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.948908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75f27\" (UniqueName: \"kubernetes.io/projected/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-kube-api-access-75f27\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:45 crc kubenswrapper[4886]: I0219 21:04:45.949094 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-utilities\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.050545 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-utilities\") pod \"redhat-marketplace-xrdmp\" (UID: 
\"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.051032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-catalog-content\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.051267 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75f27\" (UniqueName: \"kubernetes.io/projected/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-kube-api-access-75f27\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.051551 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-utilities\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.051708 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-catalog-content\") pod \"redhat-marketplace-xrdmp\" (UID: \"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.080474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75f27\" (UniqueName: \"kubernetes.io/projected/be8c5521-0d01-4afe-9a3e-7ee2ba8d014c-kube-api-access-75f27\") pod \"redhat-marketplace-xrdmp\" (UID: 
\"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c\") " pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.121768 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.219863 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hvf8k"] Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.220996 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.236929 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hvf8k"] Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e47170-8742-401a-86bd-967c3fc623be-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355357 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d6e47170-8742-401a-86bd-967c3fc623be-trusted-ca\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e47170-8742-401a-86bd-967c3fc623be-registry-certificates\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355433 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e47170-8742-401a-86bd-967c3fc623be-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355463 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-kube-api-access-mgmj2\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.355485 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-bound-sa-token\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 
crc kubenswrapper[4886]: I0219 21:04:46.355528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-registry-tls\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.397057 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-99mqn"] Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.397215 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.401038 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99mqn"] Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.401203 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.403520 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.456896 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e47170-8742-401a-86bd-967c3fc623be-trusted-ca\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e47170-8742-401a-86bd-967c3fc623be-registry-certificates\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457312 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e47170-8742-401a-86bd-967c3fc623be-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457339 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-kube-api-access-mgmj2\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457361 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-bound-sa-token\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457405 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-registry-tls\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457454 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e47170-8742-401a-86bd-967c3fc623be-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.457732 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e47170-8742-401a-86bd-967c3fc623be-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.458669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e47170-8742-401a-86bd-967c3fc623be-trusted-ca\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 
21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.458850 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e47170-8742-401a-86bd-967c3fc623be-registry-certificates\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.462037 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e47170-8742-401a-86bd-967c3fc623be-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.474065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-registry-tls\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.474659 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-bound-sa-token\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.485657 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/d6e47170-8742-401a-86bd-967c3fc623be-kube-api-access-mgmj2\") pod \"image-registry-66df7c8f76-hvf8k\" (UID: \"d6e47170-8742-401a-86bd-967c3fc623be\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.543841 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.558059 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzvg\" (UniqueName: \"kubernetes.io/projected/eebd68ba-8223-48ca-ad2a-fdc786eedad2-kube-api-access-2mzvg\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.558135 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebd68ba-8223-48ca-ad2a-fdc786eedad2-catalog-content\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.558215 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebd68ba-8223-48ca-ad2a-fdc786eedad2-utilities\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.658817 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebd68ba-8223-48ca-ad2a-fdc786eedad2-utilities\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.658883 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2mzvg\" (UniqueName: \"kubernetes.io/projected/eebd68ba-8223-48ca-ad2a-fdc786eedad2-kube-api-access-2mzvg\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.658926 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebd68ba-8223-48ca-ad2a-fdc786eedad2-catalog-content\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.659454 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebd68ba-8223-48ca-ad2a-fdc786eedad2-catalog-content\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.659667 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebd68ba-8223-48ca-ad2a-fdc786eedad2-utilities\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.685188 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mzvg\" (UniqueName: \"kubernetes.io/projected/eebd68ba-8223-48ca-ad2a-fdc786eedad2-kube-api-access-2mzvg\") pod \"redhat-operators-99mqn\" (UID: \"eebd68ba-8223-48ca-ad2a-fdc786eedad2\") " pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.702721 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrdmp"] Feb 19 
21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.716622 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.844625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerStarted","Data":"99f078918e49d76f88ec267b7d5badd3fc7ceb6d56e11c78eb47d18f13eafe32"} Feb 19 21:04:46 crc kubenswrapper[4886]: I0219 21:04:46.844905 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerStarted","Data":"b36919afaa15c39b6bbc072c7abd5810de5f043636a2e7653d7b61afa168e99a"} Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.029900 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hvf8k"] Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.143070 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99mqn"] Feb 19 21:04:47 crc kubenswrapper[4886]: W0219 21:04:47.147376 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebd68ba_8223_48ca_ad2a_fdc786eedad2.slice/crio-890e9ee98b1e6053f0d1ba70f44381e6f3396399348d97d4f3489c3bf1991b29 WatchSource:0}: Error finding container 890e9ee98b1e6053f0d1ba70f44381e6f3396399348d97d4f3489c3bf1991b29: Status 404 returned error can't find the container with id 890e9ee98b1e6053f0d1ba70f44381e6f3396399348d97d4f3489c3bf1991b29 Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.853129 4886 generic.go:334] "Generic (PLEG): container finished" podID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerID="55db788092903477a380359864c47cbc69ce5f3cd2511307ab7cdfcb038c738a" exitCode=0 Feb 19 21:04:47 crc 
kubenswrapper[4886]: I0219 21:04:47.853204 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99mqn" event={"ID":"eebd68ba-8223-48ca-ad2a-fdc786eedad2","Type":"ContainerDied","Data":"55db788092903477a380359864c47cbc69ce5f3cd2511307ab7cdfcb038c738a"} Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.853479 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99mqn" event={"ID":"eebd68ba-8223-48ca-ad2a-fdc786eedad2","Type":"ContainerStarted","Data":"890e9ee98b1e6053f0d1ba70f44381e6f3396399348d97d4f3489c3bf1991b29"} Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.854875 4886 generic.go:334] "Generic (PLEG): container finished" podID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerID="99f078918e49d76f88ec267b7d5badd3fc7ceb6d56e11c78eb47d18f13eafe32" exitCode=0 Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.854930 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerDied","Data":"99f078918e49d76f88ec267b7d5badd3fc7ceb6d56e11c78eb47d18f13eafe32"} Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.856714 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" event={"ID":"d6e47170-8742-401a-86bd-967c3fc623be","Type":"ContainerStarted","Data":"79f9ca29bb68418a58600104dbf0a0d883d0a1825a641f4264c702789c09fd18"} Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.856751 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" event={"ID":"d6e47170-8742-401a-86bd-967c3fc623be","Type":"ContainerStarted","Data":"02841dc9dda374403719a20708fc0537b3dc2299262772615021209a48f9b54f"} Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.856911 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.926745 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" podStartSLOduration=1.926722807 podStartE2EDuration="1.926722807s" podCreationTimestamp="2026-02-19 21:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:04:47.921232629 +0000 UTC m=+318.549075699" watchObservedRunningTime="2026-02-19 21:04:47.926722807 +0000 UTC m=+318.554565887" Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.965375 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-767d6fb7c7-cznrl"] Feb 19 21:04:47 crc kubenswrapper[4886]: I0219 21:04:47.965626 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" podUID="ebaebd34-645c-4d4e-8554-604dc8b1beac" containerName="controller-manager" containerID="cri-o://a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98" gracePeriod=30 Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.195274 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnld7"] Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.197191 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.199174 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.223001 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnld7"] Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.286357 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989aa724-d476-4df1-9849-22c3acf90103-utilities\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.286499 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989aa724-d476-4df1-9849-22c3acf90103-catalog-content\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.286557 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwgxx\" (UniqueName: \"kubernetes.io/projected/989aa724-d476-4df1-9849-22c3acf90103-kube-api-access-qwgxx\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.388130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989aa724-d476-4df1-9849-22c3acf90103-catalog-content\") pod \"community-operators-wnld7\" (UID: 
\"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.388219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwgxx\" (UniqueName: \"kubernetes.io/projected/989aa724-d476-4df1-9849-22c3acf90103-kube-api-access-qwgxx\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.388362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989aa724-d476-4df1-9849-22c3acf90103-utilities\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.388788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989aa724-d476-4df1-9849-22c3acf90103-catalog-content\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.388981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989aa724-d476-4df1-9849-22c3acf90103-utilities\") pod \"community-operators-wnld7\" (UID: \"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.415821 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwgxx\" (UniqueName: \"kubernetes.io/projected/989aa724-d476-4df1-9849-22c3acf90103-kube-api-access-qwgxx\") pod \"community-operators-wnld7\" (UID: 
\"989aa724-d476-4df1-9849-22c3acf90103\") " pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.543008 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld7" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.610447 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.694086 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-client-ca\") pod \"ebaebd34-645c-4d4e-8554-604dc8b1beac\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.695049 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-client-ca" (OuterVolumeSpecName: "client-ca") pod "ebaebd34-645c-4d4e-8554-604dc8b1beac" (UID: "ebaebd34-645c-4d4e-8554-604dc8b1beac"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.696805 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtf5\" (UniqueName: \"kubernetes.io/projected/ebaebd34-645c-4d4e-8554-604dc8b1beac-kube-api-access-nqtf5\") pod \"ebaebd34-645c-4d4e-8554-604dc8b1beac\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.696918 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaebd34-645c-4d4e-8554-604dc8b1beac-serving-cert\") pod \"ebaebd34-645c-4d4e-8554-604dc8b1beac\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.696949 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-proxy-ca-bundles\") pod \"ebaebd34-645c-4d4e-8554-604dc8b1beac\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.696975 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-config\") pod \"ebaebd34-645c-4d4e-8554-604dc8b1beac\" (UID: \"ebaebd34-645c-4d4e-8554-604dc8b1beac\") " Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.697337 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.698054 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"ebaebd34-645c-4d4e-8554-604dc8b1beac" (UID: "ebaebd34-645c-4d4e-8554-604dc8b1beac"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.698941 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-config" (OuterVolumeSpecName: "config") pod "ebaebd34-645c-4d4e-8554-604dc8b1beac" (UID: "ebaebd34-645c-4d4e-8554-604dc8b1beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.706486 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebaebd34-645c-4d4e-8554-604dc8b1beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ebaebd34-645c-4d4e-8554-604dc8b1beac" (UID: "ebaebd34-645c-4d4e-8554-604dc8b1beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.715477 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebaebd34-645c-4d4e-8554-604dc8b1beac-kube-api-access-nqtf5" (OuterVolumeSpecName: "kube-api-access-nqtf5") pod "ebaebd34-645c-4d4e-8554-604dc8b1beac" (UID: "ebaebd34-645c-4d4e-8554-604dc8b1beac"). InnerVolumeSpecName "kube-api-access-nqtf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.795249 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xxqfx"] Feb 19 21:04:48 crc kubenswrapper[4886]: E0219 21:04:48.795503 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebaebd34-645c-4d4e-8554-604dc8b1beac" containerName="controller-manager" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.795518 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaebd34-645c-4d4e-8554-604dc8b1beac" containerName="controller-manager" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.795645 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebaebd34-645c-4d4e-8554-604dc8b1beac" containerName="controller-manager" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.796407 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.798382 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.798796 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtf5\" (UniqueName: \"kubernetes.io/projected/ebaebd34-645c-4d4e-8554-604dc8b1beac-kube-api-access-nqtf5\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.799067 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaebd34-645c-4d4e-8554-604dc8b1beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.799081 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.799092 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaebd34-645c-4d4e-8554-604dc8b1beac-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.812697 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxqfx"] Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.819491 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnld7"] Feb 19 21:04:48 crc kubenswrapper[4886]: W0219 21:04:48.865255 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod989aa724_d476_4df1_9849_22c3acf90103.slice/crio-e51046365ad976ba7863c2df133c838ea53510e30abb4d18f411fe68a385673f WatchSource:0}: Error finding container e51046365ad976ba7863c2df133c838ea53510e30abb4d18f411fe68a385673f: Status 404 returned error can't find the container with id e51046365ad976ba7863c2df133c838ea53510e30abb4d18f411fe68a385673f Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.868954 4886 generic.go:334] "Generic (PLEG): container finished" podID="ebaebd34-645c-4d4e-8554-604dc8b1beac" containerID="a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98" exitCode=0 Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.869011 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.869063 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" event={"ID":"ebaebd34-645c-4d4e-8554-604dc8b1beac","Type":"ContainerDied","Data":"a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98"} Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.869095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767d6fb7c7-cznrl" event={"ID":"ebaebd34-645c-4d4e-8554-604dc8b1beac","Type":"ContainerDied","Data":"60c80c173a732140e0940bbf8ce76964d7367fa3a61b72fabd73cd33fdce81f7"} Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.869117 4886 scope.go:117] "RemoveContainer" containerID="a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.874152 4886 generic.go:334] "Generic (PLEG): container finished" podID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerID="cb34c22335af2acad08dcc63478774b16ab5ea4ecff9e3fc9b4702d41b65fc59" exitCode=0 Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.874257 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerDied","Data":"cb34c22335af2acad08dcc63478774b16ab5ea4ecff9e3fc9b4702d41b65fc59"} Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.899891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92403c4-f071-44a9-a322-4e849ae93c8c-catalog-content\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 
21:04:48.900089 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92403c4-f071-44a9-a322-4e849ae93c8c-utilities\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.900238 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55w7\" (UniqueName: \"kubernetes.io/projected/c92403c4-f071-44a9-a322-4e849ae93c8c-kube-api-access-k55w7\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.901335 4886 scope.go:117] "RemoveContainer" containerID="a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98" Feb 19 21:04:48 crc kubenswrapper[4886]: E0219 21:04:48.902175 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98\": container with ID starting with a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98 not found: ID does not exist" containerID="a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.902217 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98"} err="failed to get container status \"a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98\": rpc error: code = NotFound desc = could not find container \"a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98\": container with ID starting with a8fec0187d7777bfc9b905312fb884bd4e3ec5104238df3ee44cb4325876fe98 not 
found: ID does not exist" Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.908250 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-767d6fb7c7-cznrl"] Feb 19 21:04:48 crc kubenswrapper[4886]: I0219 21:04:48.911217 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-767d6fb7c7-cznrl"] Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.001377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92403c4-f071-44a9-a322-4e849ae93c8c-utilities\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.001447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55w7\" (UniqueName: \"kubernetes.io/projected/c92403c4-f071-44a9-a322-4e849ae93c8c-kube-api-access-k55w7\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.001488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92403c4-f071-44a9-a322-4e849ae93c8c-catalog-content\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.001991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92403c4-f071-44a9-a322-4e849ae93c8c-catalog-content\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 
21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.002886 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92403c4-f071-44a9-a322-4e849ae93c8c-utilities\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.017221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k55w7\" (UniqueName: \"kubernetes.io/projected/c92403c4-f071-44a9-a322-4e849ae93c8c-kube-api-access-k55w7\") pod \"certified-operators-xxqfx\" (UID: \"c92403c4-f071-44a9-a322-4e849ae93c8c\") " pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.177092 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxqfx" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.459600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxqfx"] Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.597484 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-677cd87946-f626n"] Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.598251 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.601174 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.602937 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.603295 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.603770 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.603770 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.604260 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.614477 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-677cd87946-f626n"] Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.617131 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.734060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-client-ca\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " 
pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.734120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dttl\" (UniqueName: \"kubernetes.io/projected/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-kube-api-access-6dttl\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.734179 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-proxy-ca-bundles\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.734209 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-config\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.734289 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-serving-cert\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.835752 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-client-ca\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.835831 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dttl\" (UniqueName: \"kubernetes.io/projected/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-kube-api-access-6dttl\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.835869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-proxy-ca-bundles\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.835906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-config\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.835942 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-serving-cert\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.836761 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-client-ca\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.837669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-proxy-ca-bundles\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.838166 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-config\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.847232 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-serving-cert\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.854717 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dttl\" (UniqueName: \"kubernetes.io/projected/aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5-kube-api-access-6dttl\") pod \"controller-manager-677cd87946-f626n\" (UID: \"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5\") " pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 
21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.880831 4886 generic.go:334] "Generic (PLEG): container finished" podID="989aa724-d476-4df1-9849-22c3acf90103" containerID="36d5065e4e64fcaa2186ef0a87edb16ad3691cd29b5d9b89a5cb5d97e0736337" exitCode=0
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.880901 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld7" event={"ID":"989aa724-d476-4df1-9849-22c3acf90103","Type":"ContainerDied","Data":"36d5065e4e64fcaa2186ef0a87edb16ad3691cd29b5d9b89a5cb5d97e0736337"}
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.880929 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld7" event={"ID":"989aa724-d476-4df1-9849-22c3acf90103","Type":"ContainerStarted","Data":"e51046365ad976ba7863c2df133c838ea53510e30abb4d18f411fe68a385673f"}
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.884695 4886 generic.go:334] "Generic (PLEG): container finished" podID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerID="23798f52e2fe0d7c638c9068cb713da84c5c9ba3d5b0f6cf87ff64bb3092d54a" exitCode=0
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.884789 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99mqn" event={"ID":"eebd68ba-8223-48ca-ad2a-fdc786eedad2","Type":"ContainerDied","Data":"23798f52e2fe0d7c638c9068cb713da84c5c9ba3d5b0f6cf87ff64bb3092d54a"}
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.888607 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerStarted","Data":"85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5"}
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.890580 4886 generic.go:334] "Generic (PLEG): container finished" podID="c92403c4-f071-44a9-a322-4e849ae93c8c" containerID="28348df221b266392312f2e3d3c8354366a5825209c6c639d24b42897c2e973e" exitCode=0
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.890603 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxqfx" event={"ID":"c92403c4-f071-44a9-a322-4e849ae93c8c","Type":"ContainerDied","Data":"28348df221b266392312f2e3d3c8354366a5825209c6c639d24b42897c2e973e"}
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.890619 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxqfx" event={"ID":"c92403c4-f071-44a9-a322-4e849ae93c8c","Type":"ContainerStarted","Data":"e4a2d43353563027682ac34c4e098aff685378d4269c74ff56d5a3ad17949aa5"}
Feb 19 21:04:49 crc kubenswrapper[4886]: I0219 21:04:49.931975 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrdmp" podStartSLOduration=3.448293554 podStartE2EDuration="4.931959823s" podCreationTimestamp="2026-02-19 21:04:45 +0000 UTC" firstStartedPulling="2026-02-19 21:04:47.857314219 +0000 UTC m=+318.485157319" lastFinishedPulling="2026-02-19 21:04:49.340980538 +0000 UTC m=+319.968823588" observedRunningTime="2026-02-19 21:04:49.928393463 +0000 UTC m=+320.556236533" watchObservedRunningTime="2026-02-19 21:04:49.931959823 +0000 UTC m=+320.559802883"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.043243 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-677cd87946-f626n"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.235596 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-677cd87946-f626n"]
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.612128 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebaebd34-645c-4d4e-8554-604dc8b1beac" path="/var/lib/kubelet/pods/ebaebd34-645c-4d4e-8554-604dc8b1beac/volumes"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.850995 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.872162 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/adc7f4d1-4f6b-4a8a-843e-119a248a1e17-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-txj4p\" (UID: \"adc7f4d1-4f6b-4a8a-843e-119a248a1e17\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.876462 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.912909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld7" event={"ID":"989aa724-d476-4df1-9849-22c3acf90103","Type":"ContainerStarted","Data":"376ff1f460616b84cecb7e40b6f69895496b6e93d57eef686d9b3bd070644543"}
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.918467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99mqn" event={"ID":"eebd68ba-8223-48ca-ad2a-fdc786eedad2","Type":"ContainerStarted","Data":"6166cca7764b37eeefd9d9c2b9f21483a0e352eecdb3b4e932441e9bfec8756a"}
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.926083 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxqfx" event={"ID":"c92403c4-f071-44a9-a322-4e849ae93c8c","Type":"ContainerStarted","Data":"ab8d24815c2f732471e1fdbe30d25dc00559c9a4b6eec5959392b0d2a237b0e2"}
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.932919 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" event={"ID":"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5","Type":"ContainerStarted","Data":"d371f61a178e26c17bce16ae393bd4c85644e36e71fa8993630153e17f52375d"}
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.932975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" event={"ID":"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5","Type":"ContainerStarted","Data":"3848d7ab3aed775b664ccc1216a90f028800085cd68ac43b55ea55ea764a6a94"}
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.932995 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-677cd87946-f626n"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.942794 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-677cd87946-f626n"
Feb 19 21:04:50 crc kubenswrapper[4886]: I0219 21:04:50.954875 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-99mqn" podStartSLOduration=2.527602102 podStartE2EDuration="4.954857325s" podCreationTimestamp="2026-02-19 21:04:46 +0000 UTC" firstStartedPulling="2026-02-19 21:04:47.857876464 +0000 UTC m=+318.485719554" lastFinishedPulling="2026-02-19 21:04:50.285131717 +0000 UTC m=+320.912974777" observedRunningTime="2026-02-19 21:04:50.953098891 +0000 UTC m=+321.580941941" watchObservedRunningTime="2026-02-19 21:04:50.954857325 +0000 UTC m=+321.582700385"
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.039750 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podStartSLOduration=4.039732473 podStartE2EDuration="4.039732473s" podCreationTimestamp="2026-02-19 21:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:04:51.003524271 +0000 UTC m=+321.631367331" watchObservedRunningTime="2026-02-19 21:04:51.039732473 +0000 UTC m=+321.667575523"
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.166232 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"]
Feb 19 21:04:51 crc kubenswrapper[4886]: W0219 21:04:51.171746 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc7f4d1_4f6b_4a8a_843e_119a248a1e17.slice/crio-c7b5aa6458435bf3dc8b0ab85a673fdd2156e95c6388a97ddade8008f5a208af WatchSource:0}: Error finding container c7b5aa6458435bf3dc8b0ab85a673fdd2156e95c6388a97ddade8008f5a208af: Status 404 returned error can't find the container with id c7b5aa6458435bf3dc8b0ab85a673fdd2156e95c6388a97ddade8008f5a208af
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.939222 4886 generic.go:334] "Generic (PLEG): container finished" podID="c92403c4-f071-44a9-a322-4e849ae93c8c" containerID="ab8d24815c2f732471e1fdbe30d25dc00559c9a4b6eec5959392b0d2a237b0e2" exitCode=0
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.939298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxqfx" event={"ID":"c92403c4-f071-44a9-a322-4e849ae93c8c","Type":"ContainerDied","Data":"ab8d24815c2f732471e1fdbe30d25dc00559c9a4b6eec5959392b0d2a237b0e2"}
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.942216 4886 generic.go:334] "Generic (PLEG): container finished" podID="989aa724-d476-4df1-9849-22c3acf90103" containerID="376ff1f460616b84cecb7e40b6f69895496b6e93d57eef686d9b3bd070644543" exitCode=0
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.942363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld7" event={"ID":"989aa724-d476-4df1-9849-22c3acf90103","Type":"ContainerDied","Data":"376ff1f460616b84cecb7e40b6f69895496b6e93d57eef686d9b3bd070644543"}
Feb 19 21:04:51 crc kubenswrapper[4886]: I0219 21:04:51.945768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" event={"ID":"adc7f4d1-4f6b-4a8a-843e-119a248a1e17","Type":"ContainerStarted","Data":"c7b5aa6458435bf3dc8b0ab85a673fdd2156e95c6388a97ddade8008f5a208af"}
Feb 19 21:04:53 crc kubenswrapper[4886]: I0219 21:04:53.958350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxqfx" event={"ID":"c92403c4-f071-44a9-a322-4e849ae93c8c","Type":"ContainerStarted","Data":"5763e52a481a4947dffd7bdbda0d3b38cc83c3913c787523f66d3e23cb2998b1"}
Feb 19 21:04:53 crc kubenswrapper[4886]: I0219 21:04:53.960894 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld7" event={"ID":"989aa724-d476-4df1-9849-22c3acf90103","Type":"ContainerStarted","Data":"da026257875548ad1faaabf4490400e506816a0494afebcda48e594dc032a6e5"}
Feb 19 21:04:53 crc kubenswrapper[4886]: I0219 21:04:53.963014 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" event={"ID":"adc7f4d1-4f6b-4a8a-843e-119a248a1e17","Type":"ContainerStarted","Data":"00e9543f388337de040f8037d1d90e02b98d02a17892a15941af030c83dea11f"}
Feb 19 21:04:53 crc kubenswrapper[4886]: I0219 21:04:53.963972 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"
Feb 19 21:04:53 crc kubenswrapper[4886]: I0219 21:04:53.970641 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p"
Feb 19 21:04:53 crc kubenswrapper[4886]: I0219 21:04:53.986913 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xxqfx" podStartSLOduration=3.122911041 podStartE2EDuration="5.986894205s" podCreationTimestamp="2026-02-19 21:04:48 +0000 UTC" firstStartedPulling="2026-02-19 21:04:49.891777421 +0000 UTC m=+320.519620471" lastFinishedPulling="2026-02-19 21:04:52.755760575 +0000 UTC m=+323.383603635" observedRunningTime="2026-02-19 21:04:53.983929732 +0000 UTC m=+324.611772802" watchObservedRunningTime="2026-02-19 21:04:53.986894205 +0000 UTC m=+324.614737255"
Feb 19 21:04:54 crc kubenswrapper[4886]: I0219 21:04:54.004790 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podStartSLOduration=18.484480573 podStartE2EDuration="20.004771314s" podCreationTimestamp="2026-02-19 21:04:34 +0000 UTC" firstStartedPulling="2026-02-19 21:04:51.174240381 +0000 UTC m=+321.802083431" lastFinishedPulling="2026-02-19 21:04:52.694531122 +0000 UTC m=+323.322374172" observedRunningTime="2026-02-19 21:04:54.001574795 +0000 UTC m=+324.629417855" watchObservedRunningTime="2026-02-19 21:04:54.004771314 +0000 UTC m=+324.632614374"
Feb 19 21:04:54 crc kubenswrapper[4886]: I0219 21:04:54.034700 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnld7" podStartSLOduration=3.223407602 podStartE2EDuration="6.034686548s" podCreationTimestamp="2026-02-19 21:04:48 +0000 UTC" firstStartedPulling="2026-02-19 21:04:49.882416835 +0000 UTC m=+320.510259885" lastFinishedPulling="2026-02-19 21:04:52.693695781 +0000 UTC m=+323.321538831" observedRunningTime="2026-02-19 21:04:54.034251938 +0000 UTC m=+324.662094978" watchObservedRunningTime="2026-02-19 21:04:54.034686548 +0000 UTC m=+324.662529598"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.030192 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-hlq45"]
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.032479 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.036026 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.036052 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.036447 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-b94tg"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.036563 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.038208 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-hlq45"]
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.209389 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.209448 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthdj\" (UniqueName: \"kubernetes.io/projected/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-kube-api-access-rthdj\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.209491 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-metrics-client-ca\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.209544 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.310608 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.310716 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.310745 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rthdj\" (UniqueName: \"kubernetes.io/projected/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-kube-api-access-rthdj\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.310817 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-metrics-client-ca\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.311963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-metrics-client-ca\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.317893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.323870 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.331516 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rthdj\" (UniqueName: \"kubernetes.io/projected/b8fb1439-35ca-4e5e-9c83-444df2f2b0c0-kube-api-access-rthdj\") pod \"prometheus-operator-db54df47d-hlq45\" (UID: \"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0\") " pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.347687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45"
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.788850 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-hlq45"]
Feb 19 21:04:55 crc kubenswrapper[4886]: W0219 21:04:55.798343 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8fb1439_35ca_4e5e_9c83_444df2f2b0c0.slice/crio-b36be31eb4b25270ff841e863ee8ef9cd7ee021b654b86030724edc3eee17ae6 WatchSource:0}: Error finding container b36be31eb4b25270ff841e863ee8ef9cd7ee021b654b86030724edc3eee17ae6: Status 404 returned error can't find the container with id b36be31eb4b25270ff841e863ee8ef9cd7ee021b654b86030724edc3eee17ae6
Feb 19 21:04:55 crc kubenswrapper[4886]: I0219 21:04:55.977198 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45" event={"ID":"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0","Type":"ContainerStarted","Data":"b36be31eb4b25270ff841e863ee8ef9cd7ee021b654b86030724edc3eee17ae6"}
Feb 19 21:04:56 crc kubenswrapper[4886]: I0219 21:04:56.122136 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrdmp"
Feb 19 21:04:56 crc kubenswrapper[4886]: I0219 21:04:56.122184 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrdmp"
Feb 19 21:04:56 crc kubenswrapper[4886]: I0219 21:04:56.180096 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrdmp"
Feb 19 21:04:56 crc kubenswrapper[4886]: I0219 21:04:56.717915 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-99mqn"
Feb 19 21:04:56 crc kubenswrapper[4886]: I0219 21:04:56.718502 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-99mqn"
Feb 19 21:04:57 crc kubenswrapper[4886]: I0219 21:04:57.055648 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrdmp"
Feb 19 21:04:57 crc kubenswrapper[4886]: I0219 21:04:57.778297 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-99mqn" podUID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerName="registry-server" probeResult="failure" output=<
Feb 19 21:04:57 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s
Feb 19 21:04:57 crc kubenswrapper[4886]: >
Feb 19 21:04:58 crc kubenswrapper[4886]: I0219 21:04:58.544286 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnld7"
Feb 19 21:04:58 crc kubenswrapper[4886]: I0219 21:04:58.544421 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnld7"
Feb 19 21:04:58 crc kubenswrapper[4886]: I0219 21:04:58.618087 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnld7"
Feb 19 21:04:59 crc kubenswrapper[4886]: I0219 21:04:59.054640 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnld7"
Feb 19 21:04:59 crc kubenswrapper[4886]: I0219 21:04:59.178359 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xxqfx"
Feb 19 21:04:59 crc kubenswrapper[4886]: I0219 21:04:59.178414 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xxqfx"
Feb 19 21:04:59 crc kubenswrapper[4886]: I0219 21:04:59.247101 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xxqfx"
Feb 19 21:05:00 crc kubenswrapper[4886]: I0219 21:05:00.004719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45" event={"ID":"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0","Type":"ContainerStarted","Data":"98bbc7e260cc1c3e66541e220f60456f3147a34e3568a2306d66bde331efbad1"}
Feb 19 21:05:00 crc kubenswrapper[4886]: I0219 21:05:00.004816 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45" event={"ID":"b8fb1439-35ca-4e5e-9c83-444df2f2b0c0","Type":"ContainerStarted","Data":"8736c0cdb6e0d8691306831b25c1ab69d78af2b4dad99df6e313b1119d060488"}
Feb 19 21:05:00 crc kubenswrapper[4886]: I0219 21:05:00.040488 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-hlq45" podStartSLOduration=1.832843262 podStartE2EDuration="5.040460161s" podCreationTimestamp="2026-02-19 21:04:55 +0000 UTC" firstStartedPulling="2026-02-19 21:04:55.800998599 +0000 UTC m=+326.428841689" lastFinishedPulling="2026-02-19 21:04:59.008615538 +0000 UTC m=+329.636458588" observedRunningTime="2026-02-19 21:05:00.02943746 +0000 UTC m=+330.657280550" watchObservedRunningTime="2026-02-19 21:05:00.040460161 +0000 UTC m=+330.668303251"
Feb 19 21:05:00 crc kubenswrapper[4886]: I0219 21:05:00.077119 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xxqfx"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.421702 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"]
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.422664 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.426799 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.427047 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.427063 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vgcmm"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.439777 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"]
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.466769 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"]
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.467809 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.473642 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9vlx7"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.473913 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.474051 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.478946 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.479441 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-98s62"]
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.480317 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.483497 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"]
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.489842 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-tt2h5"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.490059 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.490193 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.516657 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/17323040-462f-4430-b42b-bc6e8e20696c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.516749 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4bn7\" (UniqueName: \"kubernetes.io/projected/17323040-462f-4430-b42b-bc6e8e20696c-kube-api-access-l4bn7\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.516785 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/17323040-462f-4430-b42b-bc6e8e20696c-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.516813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/17323040-462f-4430-b42b-bc6e8e20696c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.617824 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-metrics-client-ca\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.617880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/17323040-462f-4430-b42b-bc6e8e20696c-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.617901 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4bn7\" (UniqueName: \"kubernetes.io/projected/17323040-462f-4430-b42b-bc6e8e20696c-kube-api-access-l4bn7\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.617954 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-wtmp\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.617983 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-root\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618002 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs66r\" (UniqueName: \"kubernetes.io/projected/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-kube-api-access-rs66r\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/17323040-462f-4430-b42b-bc6e8e20696c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618043 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618084 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618107 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618121 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/17323040-462f-4430-b42b-bc6e8e20696c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618182 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-textfile\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618249 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-sys\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618386 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnk7\" (UniqueName: \"kubernetes.io/projected/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-api-access-lqnk7\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-tls\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.618993 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/17323040-462f-4430-b42b-bc6e8e20696c-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.637155 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/17323040-462f-4430-b42b-bc6e8e20696c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.637159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/17323040-462f-4430-b42b-bc6e8e20696c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"
Feb 19
21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.639344 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4bn7\" (UniqueName: \"kubernetes.io/projected/17323040-462f-4430-b42b-bc6e8e20696c-kube-api-access-l4bn7\") pod \"openshift-state-metrics-566fddb674-r7s4n\" (UID: \"17323040-462f-4430-b42b-bc6e8e20696c\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-tls\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720171 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-metrics-client-ca\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720207 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-wtmp\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-root\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 
21:05:02.720254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs66r\" (UniqueName: \"kubernetes.io/projected/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-kube-api-access-rs66r\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720295 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720359 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-metrics-client-ca\") pod 
\"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720432 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-textfile\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720458 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-sys\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720489 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqnk7\" (UniqueName: \"kubernetes.io/projected/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-api-access-lqnk7\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720506 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.720945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-root\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.721054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.721524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.721788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.722317 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-wtmp\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.723075 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-textfile\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.724134 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.724322 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-node-exporter-tls\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.725160 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.727625 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.730709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-metrics-client-ca\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.736304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqnk7\" (UniqueName: \"kubernetes.io/projected/6829dc62-279c-4c90-aa6d-0a0c11e5a7b1-kube-api-access-lqnk7\") pod \"kube-state-metrics-777cb5bd5d-nnsh8\" (UID: \"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.736473 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.739386 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-sys\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.763685 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs66r\" (UniqueName: \"kubernetes.io/projected/5ca92e37-b6e6-49f7-ba19-fc0f0981431a-kube-api-access-rs66r\") pod \"node-exporter-98s62\" (UID: \"5ca92e37-b6e6-49f7-ba19-fc0f0981431a\") " pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.781180 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" Feb 19 21:05:02 crc kubenswrapper[4886]: I0219 21:05:02.803643 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-98s62" Feb 19 21:05:02 crc kubenswrapper[4886]: W0219 21:05:02.834249 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca92e37_b6e6_49f7_ba19_fc0f0981431a.slice/crio-888215987bef3bb421805e9ec8a51efc4d2d1531daad9ca1fc1b419451293749 WatchSource:0}: Error finding container 888215987bef3bb421805e9ec8a51efc4d2d1531daad9ca1fc1b419451293749: Status 404 returned error can't find the container with id 888215987bef3bb421805e9ec8a51efc4d2d1531daad9ca1fc1b419451293749 Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.042155 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-98s62" event={"ID":"5ca92e37-b6e6-49f7-ba19-fc0f0981431a","Type":"ContainerStarted","Data":"888215987bef3bb421805e9ec8a51efc4d2d1531daad9ca1fc1b419451293749"} Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.214572 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n"] Feb 19 21:05:03 crc kubenswrapper[4886]: W0219 21:05:03.221199 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17323040_462f_4430_b42b_bc6e8e20696c.slice/crio-5c09ae022cde1597634260df559fcd838f3421a6791c1fda0c7eb0462e56a50c WatchSource:0}: Error finding container 5c09ae022cde1597634260df559fcd838f3421a6791c1fda0c7eb0462e56a50c: Status 404 returned error can't find the container with id 5c09ae022cde1597634260df559fcd838f3421a6791c1fda0c7eb0462e56a50c Feb 19 21:05:03 crc kubenswrapper[4886]: W0219 21:05:03.355524 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6829dc62_279c_4c90_aa6d_0a0c11e5a7b1.slice/crio-c9da90cf3c33d83700a3c6c81140fe6e80a754ffc3c13db8dfe0b7d7693b6ecc WatchSource:0}: Error finding 
container c9da90cf3c33d83700a3c6c81140fe6e80a754ffc3c13db8dfe0b7d7693b6ecc: Status 404 returned error can't find the container with id c9da90cf3c33d83700a3c6c81140fe6e80a754ffc3c13db8dfe0b7d7693b6ecc Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.357602 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8"] Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.492050 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.494134 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.495623 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.498972 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.498988 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.499024 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.499119 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.499178 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.499214 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-generated" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.499240 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-hw868" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.508227 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.567367 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631252 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/286cbb27-d565-4ea4-80e2-9187c247185f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/286cbb27-d565-4ea4-80e2-9187c247185f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631348 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631372 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/286cbb27-d565-4ea4-80e2-9187c247185f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631565 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-config-volume\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631611 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/286cbb27-d565-4ea4-80e2-9187c247185f-config-out\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631660 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-web-config\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631846 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/286cbb27-d565-4ea4-80e2-9187c247185f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631912 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxhfs\" (UniqueName: \"kubernetes.io/projected/286cbb27-d565-4ea4-80e2-9187c247185f-kube-api-access-kxhfs\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.631984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733076 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-web-config\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733143 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733177 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/286cbb27-d565-4ea4-80e2-9187c247185f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733200 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxhfs\" (UniqueName: \"kubernetes.io/projected/286cbb27-d565-4ea4-80e2-9187c247185f-kube-api-access-kxhfs\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733239 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733285 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/286cbb27-d565-4ea4-80e2-9187c247185f-tls-assets\") pod \"alertmanager-main-0\" (UID: 
\"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733311 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/286cbb27-d565-4ea4-80e2-9187c247185f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733335 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733357 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/286cbb27-d565-4ea4-80e2-9187c247185f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-config-volume\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733405 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/286cbb27-d565-4ea4-80e2-9187c247185f-config-out\") pod \"alertmanager-main-0\" (UID: 
\"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.733432 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.735093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/286cbb27-d565-4ea4-80e2-9187c247185f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.735354 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/286cbb27-d565-4ea4-80e2-9187c247185f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.736906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/286cbb27-d565-4ea4-80e2-9187c247185f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.740362 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/286cbb27-d565-4ea4-80e2-9187c247185f-config-out\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.740487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.740936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.748981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-web-config\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.749380 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-config-volume\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.749480 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.750755 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/286cbb27-d565-4ea4-80e2-9187c247185f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.751089 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/286cbb27-d565-4ea4-80e2-9187c247185f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.765026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxhfs\" (UniqueName: \"kubernetes.io/projected/286cbb27-d565-4ea4-80e2-9187c247185f-kube-api-access-kxhfs\") pod \"alertmanager-main-0\" (UID: \"286cbb27-d565-4ea4-80e2-9187c247185f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:03 crc kubenswrapper[4886]: I0219 21:05:03.810023 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.052633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" event={"ID":"17323040-462f-4430-b42b-bc6e8e20696c","Type":"ContainerStarted","Data":"2804915b5a40c47635875d95132047dc052e13114b437f8ced3145c0826309aa"} Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.052948 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" event={"ID":"17323040-462f-4430-b42b-bc6e8e20696c","Type":"ContainerStarted","Data":"ea56287233a2ccc9a608f774d1823c704cf69a675ff38a0f2b1016436e0b813c"} Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.053001 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" event={"ID":"17323040-462f-4430-b42b-bc6e8e20696c","Type":"ContainerStarted","Data":"5c09ae022cde1597634260df559fcd838f3421a6791c1fda0c7eb0462e56a50c"} Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.053704 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" event={"ID":"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1","Type":"ContainerStarted","Data":"c9da90cf3c33d83700a3c6c81140fe6e80a754ffc3c13db8dfe0b7d7693b6ecc"} Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.284104 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.510700 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-bc79bc97-87qbv"] Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.517767 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.520224 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.525902 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-dueq367qc9kvs" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.526456 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.526535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.532568 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.534319 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.535006 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-rqgv8" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.553880 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-bc79bc97-87qbv"] Feb 19 21:05:04 crc kubenswrapper[4886]: W0219 21:05:04.595200 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod286cbb27_d565_4ea4_80e2_9187c247185f.slice/crio-edc27f005da85e01bc07b3ba1593526329065b5fa6b9947a49249a8fc9e157a1 WatchSource:0}: Error finding container edc27f005da85e01bc07b3ba1593526329065b5fa6b9947a49249a8fc9e157a1: Status 404 
returned error can't find the container with id edc27f005da85e01bc07b3ba1593526329065b5fa6b9947a49249a8fc9e157a1 Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652294 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-grpc-tls\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652405 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-metrics-client-ca\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652438 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652480 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxp6n\" (UniqueName: \"kubernetes.io/projected/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-kube-api-access-qxp6n\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652525 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-tls\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.652580 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754272 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754321 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxp6n\" (UniqueName: \"kubernetes.io/projected/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-kube-api-access-qxp6n\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754357 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754375 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-tls\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754394 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" 
Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754424 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754441 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-grpc-tls\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.754474 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-metrics-client-ca\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.755236 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-metrics-client-ca\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.761661 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod 
\"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.764782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.765093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.765371 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-grpc-tls\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.766253 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-tls\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.775972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.779859 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxp6n\" (UniqueName: \"kubernetes.io/projected/e0c1d73e-0ad8-46cc-afbc-d19899896bdd-kube-api-access-qxp6n\") pod \"thanos-querier-bc79bc97-87qbv\" (UID: \"e0c1d73e-0ad8-46cc-afbc-d19899896bdd\") " pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:04 crc kubenswrapper[4886]: I0219 21:05:04.842105 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:05 crc kubenswrapper[4886]: I0219 21:05:05.066339 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"edc27f005da85e01bc07b3ba1593526329065b5fa6b9947a49249a8fc9e157a1"} Feb 19 21:05:05 crc kubenswrapper[4886]: I0219 21:05:05.071510 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-98s62" event={"ID":"5ca92e37-b6e6-49f7-ba19-fc0f0981431a","Type":"ContainerStarted","Data":"d8afe52add4e500f320af6d342fd449227e3673e2f916b65aecd3aa99bae71de"} Feb 19 21:05:05 crc kubenswrapper[4886]: I0219 21:05:05.254154 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-bc79bc97-87qbv"] Feb 19 21:05:05 crc kubenswrapper[4886]: W0219 21:05:05.258025 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0c1d73e_0ad8_46cc_afbc_d19899896bdd.slice/crio-12668a4fd61994000811a8ba3b15325ddf5ec37aaa6734fbe50e86e2ba58a682 WatchSource:0}: Error finding container 12668a4fd61994000811a8ba3b15325ddf5ec37aaa6734fbe50e86e2ba58a682: Status 404 returned error can't find the container with id 12668a4fd61994000811a8ba3b15325ddf5ec37aaa6734fbe50e86e2ba58a682 Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.078511 4886 generic.go:334] "Generic (PLEG): container finished" podID="5ca92e37-b6e6-49f7-ba19-fc0f0981431a" containerID="d8afe52add4e500f320af6d342fd449227e3673e2f916b65aecd3aa99bae71de" exitCode=0 Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.080773 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-98s62" event={"ID":"5ca92e37-b6e6-49f7-ba19-fc0f0981431a","Type":"ContainerDied","Data":"d8afe52add4e500f320af6d342fd449227e3673e2f916b65aecd3aa99bae71de"} Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.083691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"12668a4fd61994000811a8ba3b15325ddf5ec37aaa6734fbe50e86e2ba58a682"} Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.549954 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.610963 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tp6s4"] Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.755438 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:05:06 crc kubenswrapper[4886]: I0219 21:05:06.816410 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-99mqn" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.097813 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"ff1f05ca816674f3c8aa47b942a4688d83249ffebbf70b54c2535a0c65066b34"} Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.102116 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-98s62" event={"ID":"5ca92e37-b6e6-49f7-ba19-fc0f0981431a","Type":"ContainerStarted","Data":"41b937b8d2c6e0934ca3f436b59d627691f6278057d2b343323265a09f12b6f0"} Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.106220 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" event={"ID":"17323040-462f-4430-b42b-bc6e8e20696c","Type":"ContainerStarted","Data":"ff80fc039076389c20af6b9ffa4b0fc7ecdfc0ed977d2a1dde3a732c3ea6947c"} Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.111304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" event={"ID":"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1","Type":"ContainerStarted","Data":"436924ac7c1ad9dadea1a757e360b34164fa3c84613500d864ad5b32cafe5fd9"} Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.154529 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r7s4n" podStartSLOduration=2.342713464 podStartE2EDuration="5.154509701s" podCreationTimestamp="2026-02-19 21:05:02 +0000 UTC" firstStartedPulling="2026-02-19 21:05:04.032349081 +0000 UTC m=+334.660192131" lastFinishedPulling="2026-02-19 21:05:06.844145318 +0000 UTC m=+337.471988368" observedRunningTime="2026-02-19 21:05:07.150988965 +0000 UTC m=+337.778832015" watchObservedRunningTime="2026-02-19 21:05:07.154509701 +0000 UTC 
m=+337.782352761" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.250067 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-557fd44cbd-chqz4"] Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.251120 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.267044 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-557fd44cbd-chqz4"] Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396282 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-trusted-ca-bundle\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396375 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-oauth-config\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396401 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-config\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396441 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-serving-cert\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396570 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-service-ca\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396636 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-oauth-serving-cert\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.396661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf4gr\" (UniqueName: \"kubernetes.io/projected/dc106b01-da5f-4c08-a35f-5c83e6c2114d-kube-api-access-jf4gr\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498253 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-oauth-config\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-config\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-serving-cert\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-service-ca\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498395 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-oauth-serving-cert\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf4gr\" (UniqueName: \"kubernetes.io/projected/dc106b01-da5f-4c08-a35f-5c83e6c2114d-kube-api-access-jf4gr\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.498478 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-trusted-ca-bundle\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.499706 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-trusted-ca-bundle\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.500364 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-config\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.500704 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-service-ca\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.501003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-oauth-serving-cert\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.506430 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-serving-cert\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.507375 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-oauth-config\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.517626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf4gr\" (UniqueName: \"kubernetes.io/projected/dc106b01-da5f-4c08-a35f-5c83e6c2114d-kube-api-access-jf4gr\") pod \"console-557fd44cbd-chqz4\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.575415 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.771389 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-799cc74bc-wv5f6"] Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.772563 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.774927 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-799cc74bc-wv5f6"] Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.777601 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.777771 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.777973 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ah9rauavkf6ud" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.778078 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.778284 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9zr2v" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.778394 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918609 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-secret-metrics-client-certs\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-secret-metrics-server-tls\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918793 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-client-ca-bundle\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-audit-log\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918961 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-metrics-server-audit-profiles\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " 
pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:07 crc kubenswrapper[4886]: I0219 21:05:07.918987 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dhz\" (UniqueName: \"kubernetes.io/projected/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-kube-api-access-j2dhz\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.020118 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.020206 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-secret-metrics-client-certs\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.020379 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-secret-metrics-server-tls\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.020407 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-client-ca-bundle\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.020947 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-audit-log\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.020989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-metrics-server-audit-profiles\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.021005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2dhz\" (UniqueName: \"kubernetes.io/projected/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-kube-api-access-j2dhz\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.021233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.021382 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-audit-log\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.022105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-metrics-server-audit-profiles\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.024090 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-secret-metrics-client-certs\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.024112 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-client-ca-bundle\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.028684 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-secret-metrics-server-tls\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc 
kubenswrapper[4886]: I0219 21:05:08.036806 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2dhz\" (UniqueName: \"kubernetes.io/projected/a55e7bbd-33a0-46c7-b08b-bf71421bd1bf-kube-api-access-j2dhz\") pod \"metrics-server-799cc74bc-wv5f6\" (UID: \"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf\") " pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.062892 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-557fd44cbd-chqz4"] Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.098449 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.119000 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" event={"ID":"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1","Type":"ContainerStarted","Data":"d1b2b09b7df4387a2c44d5fb267bb1207f98cb8ff09b15d04e878e58d6815508"} Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.119055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" event={"ID":"6829dc62-279c-4c90-aa6d-0a0c11e5a7b1","Type":"ContainerStarted","Data":"b4b03845d0c7d7bbf345e28ffdc068e1af6d58e549aa83a0472f465f963c8d29"} Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.121657 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerDied","Data":"ff1f05ca816674f3c8aa47b942a4688d83249ffebbf70b54c2535a0c65066b34"} Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.121604 4886 generic.go:334] "Generic (PLEG): container finished" podID="286cbb27-d565-4ea4-80e2-9187c247185f" containerID="ff1f05ca816674f3c8aa47b942a4688d83249ffebbf70b54c2535a0c65066b34" exitCode=0 Feb 19 
21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.125380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-98s62" event={"ID":"5ca92e37-b6e6-49f7-ba19-fc0f0981431a","Type":"ContainerStarted","Data":"30bc1c6d903ac74e2ade103f1f719465fa854b1fd79f2285724c64e6dec53a58"} Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.161799 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-nnsh8" podStartSLOduration=2.676112771 podStartE2EDuration="6.161779369s" podCreationTimestamp="2026-02-19 21:05:02 +0000 UTC" firstStartedPulling="2026-02-19 21:05:03.359156667 +0000 UTC m=+333.986999727" lastFinishedPulling="2026-02-19 21:05:06.844823285 +0000 UTC m=+337.472666325" observedRunningTime="2026-02-19 21:05:08.137637906 +0000 UTC m=+338.765480956" watchObservedRunningTime="2026-02-19 21:05:08.161779369 +0000 UTC m=+338.789622519" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.217019 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-98s62" podStartSLOduration=4.414361523 podStartE2EDuration="6.216866662s" podCreationTimestamp="2026-02-19 21:05:02 +0000 UTC" firstStartedPulling="2026-02-19 21:05:02.851750855 +0000 UTC m=+333.479593905" lastFinishedPulling="2026-02-19 21:05:04.654255994 +0000 UTC m=+335.282099044" observedRunningTime="2026-02-19 21:05:08.214055673 +0000 UTC m=+338.841898723" watchObservedRunningTime="2026-02-19 21:05:08.216866662 +0000 UTC m=+338.844709722" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.230752 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q"] Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.231495 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.233310 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.238329 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.251393 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q"] Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.324301 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/773be00e-ec8f-4ad1-b356-5d80fda75835-monitoring-plugin-cert\") pod \"monitoring-plugin-6b66dd58b-2rt7q\" (UID: \"773be00e-ec8f-4ad1-b356-5d80fda75835\") " pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.431709 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/773be00e-ec8f-4ad1-b356-5d80fda75835-monitoring-plugin-cert\") pod \"monitoring-plugin-6b66dd58b-2rt7q\" (UID: \"773be00e-ec8f-4ad1-b356-5d80fda75835\") " pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.447768 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/773be00e-ec8f-4ad1-b356-5d80fda75835-monitoring-plugin-cert\") pod \"monitoring-plugin-6b66dd58b-2rt7q\" (UID: \"773be00e-ec8f-4ad1-b356-5d80fda75835\") " pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.550897 4886 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:08 crc kubenswrapper[4886]: W0219 21:05:08.695481 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc106b01_da5f_4c08_a35f_5c83e6c2114d.slice/crio-be0cc2e6d1777d0bc6aff0af2471842f89d07ba556efc081e1eadd19eebc6036 WatchSource:0}: Error finding container be0cc2e6d1777d0bc6aff0af2471842f89d07ba556efc081e1eadd19eebc6036: Status 404 returned error can't find the container with id be0cc2e6d1777d0bc6aff0af2471842f89d07ba556efc081e1eadd19eebc6036 Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.800169 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.802006 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.813902 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.814124 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.814241 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2hdbv" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.814318 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.814249 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.814882 4886 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.814971 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.815461 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.815555 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-en84fs1mc8f9h" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.815630 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.815798 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.816609 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.817949 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.874826 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.941865 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdt5\" (UniqueName: \"kubernetes.io/projected/a30f0477-38c1-4a41-a633-81628dbab75a-kube-api-access-6bdt5\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: 
I0219 21:05:08.942139 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a30f0477-38c1-4a41-a633-81628dbab75a-config-out\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.942161 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.942184 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.942205 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc 
kubenswrapper[4886]: I0219 21:05:08.943304 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943332 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a30f0477-38c1-4a41-a633-81628dbab75a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943383 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: 
\"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943407 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-web-config\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943441 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-config\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943480 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943494 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:08 crc kubenswrapper[4886]: I0219 21:05:08.943514 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.044937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.044976 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045019 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-web-config\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045037 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045056 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-config\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045110 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045131 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045154 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bdt5\" (UniqueName: \"kubernetes.io/projected/a30f0477-38c1-4a41-a633-81628dbab75a-kube-api-access-6bdt5\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045170 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a30f0477-38c1-4a41-a633-81628dbab75a-config-out\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045188 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045221 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045241 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045273 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045302 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045320 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a30f0477-38c1-4a41-a633-81628dbab75a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.045336 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: E0219 21:05:09.045418 4886 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: configmap "prometheus-k8s-rulefiles-0" not found Feb 19 21:05:09 crc kubenswrapper[4886]: E0219 21:05:09.045463 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-rulefiles-0 podName:a30f0477-38c1-4a41-a633-81628dbab75a nodeName:}" failed. No retries permitted until 2026-02-19 21:05:09.545448062 +0000 UTC m=+340.173291102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "a30f0477-38c1-4a41-a633-81628dbab75a") : configmap "prometheus-k8s-rulefiles-0" not found Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.048117 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.052048 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.053581 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.054276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.054509 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.054856 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.057858 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.062234 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.062456 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.062581 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.063864 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a30f0477-38c1-4a41-a633-81628dbab75a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.065108 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-web-config\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.067986 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bdt5\" (UniqueName: 
\"kubernetes.io/projected/a30f0477-38c1-4a41-a633-81628dbab75a-kube-api-access-6bdt5\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.068689 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.069954 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-config\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.070245 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a30f0477-38c1-4a41-a633-81628dbab75a-config-out\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.075271 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a30f0477-38c1-4a41-a633-81628dbab75a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.136352 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-557fd44cbd-chqz4" 
event={"ID":"dc106b01-da5f-4c08-a35f-5c83e6c2114d","Type":"ContainerStarted","Data":"0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8"} Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.137515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-557fd44cbd-chqz4" event={"ID":"dc106b01-da5f-4c08-a35f-5c83e6c2114d","Type":"ContainerStarted","Data":"be0cc2e6d1777d0bc6aff0af2471842f89d07ba556efc081e1eadd19eebc6036"} Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.140183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"006e7baf6aa89d692ba73cc76dd9ad1f9b6f13905c3b9c8ec850725ea8426ceb"} Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.155670 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-557fd44cbd-chqz4" podStartSLOduration=2.155654189 podStartE2EDuration="2.155654189s" podCreationTimestamp="2026-02-19 21:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:05:09.152166843 +0000 UTC m=+339.780009893" watchObservedRunningTime="2026-02-19 21:05:09.155654189 +0000 UTC m=+339.783497239" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.218287 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-799cc74bc-wv5f6"] Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.351271 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q"] Feb 19 21:05:09 crc kubenswrapper[4886]: W0219 21:05:09.359595 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773be00e_ec8f_4ad1_b356_5d80fda75835.slice/crio-48b2c279ab0ae2efc4bf8462eb1b276938b18b080f5cce5854af4cb2537a4a41 WatchSource:0}: Error finding container 48b2c279ab0ae2efc4bf8462eb1b276938b18b080f5cce5854af4cb2537a4a41: Status 404 returned error can't find the container with id 48b2c279ab0ae2efc4bf8462eb1b276938b18b080f5cce5854af4cb2537a4a41 Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.554477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.561004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a30f0477-38c1-4a41-a633-81628dbab75a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a30f0477-38c1-4a41-a633-81628dbab75a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:09 crc kubenswrapper[4886]: I0219 21:05:09.783050 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:10 crc kubenswrapper[4886]: I0219 21:05:10.146513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" event={"ID":"773be00e-ec8f-4ad1-b356-5d80fda75835","Type":"ContainerStarted","Data":"48b2c279ab0ae2efc4bf8462eb1b276938b18b080f5cce5854af4cb2537a4a41"} Feb 19 21:05:10 crc kubenswrapper[4886]: I0219 21:05:10.147375 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" event={"ID":"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf","Type":"ContainerStarted","Data":"5e1d2b99928324515d10506eff9cdb794b83c23835c482deed8dae8c96e38976"} Feb 19 21:05:10 crc kubenswrapper[4886]: I0219 21:05:10.158400 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"a5bcf6a5e6c3a178b1223499d1def0196dd4c02a3e0fe77346ff2c7bd9f7d3e9"} Feb 19 21:05:10 crc kubenswrapper[4886]: I0219 21:05:10.158463 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"544c2afc3f300dce2b36ff108ea55e0e02e50ed5f5a5c250ea8f12cb3e002b5f"} Feb 19 21:05:10 crc kubenswrapper[4886]: I0219 21:05:10.645323 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 21:05:10 crc kubenswrapper[4886]: W0219 21:05:10.653078 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda30f0477_38c1_4a41_a633_81628dbab75a.slice/crio-902ba6643b794c1655b4fc7917da5f9786f8677f8e6eea939c03ad5b2ba2f461 WatchSource:0}: Error finding container 902ba6643b794c1655b4fc7917da5f9786f8677f8e6eea939c03ad5b2ba2f461: Status 404 returned error can't find 
the container with id 902ba6643b794c1655b4fc7917da5f9786f8677f8e6eea939c03ad5b2ba2f461 Feb 19 21:05:11 crc kubenswrapper[4886]: I0219 21:05:11.167165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"55a5fcacc75cf2934f1dc68cfc85dc883c6b81dcbc34ab50c21c048417dec2c4"} Feb 19 21:05:11 crc kubenswrapper[4886]: I0219 21:05:11.167455 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"7eba73ab12d26aa9f72cc1906efd3a2b37a1641b122a36fd71f25baf590b0818"} Feb 19 21:05:11 crc kubenswrapper[4886]: I0219 21:05:11.167467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"d656b38f0854bb39c30199ed2619ec7f5c456a0bee81577ab90cc803672ab151"} Feb 19 21:05:11 crc kubenswrapper[4886]: I0219 21:05:11.169658 4886 generic.go:334] "Generic (PLEG): container finished" podID="a30f0477-38c1-4a41-a633-81628dbab75a" containerID="ed467460aba796122afcce28fb6fd9ca60b5a9ec56297bbe55e3096e976257f5" exitCode=0 Feb 19 21:05:11 crc kubenswrapper[4886]: I0219 21:05:11.169703 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerDied","Data":"ed467460aba796122afcce28fb6fd9ca60b5a9ec56297bbe55e3096e976257f5"} Feb 19 21:05:11 crc kubenswrapper[4886]: I0219 21:05:11.169725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"902ba6643b794c1655b4fc7917da5f9786f8677f8e6eea939c03ad5b2ba2f461"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.183151 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" event={"ID":"773be00e-ec8f-4ad1-b356-5d80fda75835","Type":"ContainerStarted","Data":"a5666df1d8f2cab3e491fb1059339b10bd590bdbf1f4d7f9b2f02b5cb526913a"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.184908 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.187418 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" event={"ID":"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf","Type":"ContainerStarted","Data":"fc38f9b1ac58fcaea104c158b4f031beac2385497d750e96b62357f29fd370f0"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.192714 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"3635d38eb5c10e65b8b1bd5c3b94b53f41433543f679ab09087eaec92ab5a977"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.192754 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"c13a7131caeb8848ffff451dee06717a70e5065955431adc4b7ec7b49774e86e"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.193180 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.196444 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"a62db137245c61088b3dfff85d68915181947003cda3da1da7c9aa1d8c5fd4fc"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 
21:05:12.196488 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"7d4c82538c61e6f2bca69b76c5a149c8c325ae6bc23df477a9937e6f2589ec70"} Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.201082 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" podStartSLOduration=1.883878524 podStartE2EDuration="4.201062864s" podCreationTimestamp="2026-02-19 21:05:08 +0000 UTC" firstStartedPulling="2026-02-19 21:05:09.362401376 +0000 UTC m=+339.990244426" lastFinishedPulling="2026-02-19 21:05:11.679585716 +0000 UTC m=+342.307428766" observedRunningTime="2026-02-19 21:05:12.198682635 +0000 UTC m=+342.826525685" watchObservedRunningTime="2026-02-19 21:05:12.201062864 +0000 UTC m=+342.828905914" Feb 19 21:05:12 crc kubenswrapper[4886]: I0219 21:05:12.219877 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" podStartSLOduration=2.788329817 podStartE2EDuration="5.219863225s" podCreationTimestamp="2026-02-19 21:05:07 +0000 UTC" firstStartedPulling="2026-02-19 21:05:09.248102979 +0000 UTC m=+339.875946029" lastFinishedPulling="2026-02-19 21:05:11.679636357 +0000 UTC m=+342.307479437" observedRunningTime="2026-02-19 21:05:12.217096007 +0000 UTC m=+342.844939067" watchObservedRunningTime="2026-02-19 21:05:12.219863225 +0000 UTC m=+342.847706275" Feb 19 21:05:13 crc kubenswrapper[4886]: I0219 21:05:13.216835 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"286cbb27-d565-4ea4-80e2-9187c247185f","Type":"ContainerStarted","Data":"b0d7a4cdb1f296c68ed0ecf8811fbafa442d142d55ab5e6ca50dfaf8a51c10d2"} Feb 19 21:05:13 crc kubenswrapper[4886]: I0219 21:05:13.227361 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" event={"ID":"e0c1d73e-0ad8-46cc-afbc-d19899896bdd","Type":"ContainerStarted","Data":"42d02131e9c06723e513da1b29fe284bdda771607550f00deeb7bfb379670ae2"} Feb 19 21:05:13 crc kubenswrapper[4886]: I0219 21:05:13.227635 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:13 crc kubenswrapper[4886]: I0219 21:05:13.258331 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.654853979 podStartE2EDuration="10.25831465s" podCreationTimestamp="2026-02-19 21:05:03 +0000 UTC" firstStartedPulling="2026-02-19 21:05:04.598491405 +0000 UTC m=+335.226334455" lastFinishedPulling="2026-02-19 21:05:10.201952076 +0000 UTC m=+340.829795126" observedRunningTime="2026-02-19 21:05:13.254092276 +0000 UTC m=+343.881935336" watchObservedRunningTime="2026-02-19 21:05:13.25831465 +0000 UTC m=+343.886157700" Feb 19 21:05:13 crc kubenswrapper[4886]: I0219 21:05:13.289916 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" podStartSLOduration=2.869891371 podStartE2EDuration="9.289896696s" podCreationTimestamp="2026-02-19 21:05:04 +0000 UTC" firstStartedPulling="2026-02-19 21:05:05.260017602 +0000 UTC m=+335.887860662" lastFinishedPulling="2026-02-19 21:05:11.680022937 +0000 UTC m=+342.307865987" observedRunningTime="2026-02-19 21:05:13.288811959 +0000 UTC m=+343.916655009" watchObservedRunningTime="2026-02-19 21:05:13.289896696 +0000 UTC m=+343.917739746" Feb 19 21:05:14 crc kubenswrapper[4886]: I0219 21:05:14.245575 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" Feb 19 21:05:16 crc kubenswrapper[4886]: I0219 21:05:16.253765 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"65d4b22cb7c7eee116ddcb25f9cd2955518bfd033442b28e914b1b19f86dc806"} Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.263441 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"758c600ab78f13bdf7ae81b4c571b08366ecd5bbbbeaa0394ee56edde18dc551"} Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.263838 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"b14eeaa25d45a68c920ee22c7ac58d11bff81a000e4db33c67f7c9ea4fff5217"} Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.263848 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"0917b964c3021f7840bcbad970d45f1052750af1ebd148d3ea916c8e4541b706"} Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.263857 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"4805c686c384ed93024cb93b3363450b686fb39f2404a3a1576799f764bfc377"} Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.263867 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a30f0477-38c1-4a41-a633-81628dbab75a","Type":"ContainerStarted","Data":"2aaf5d46587cfc86dd3bb0359da4a675a9b798a87d7c32b9caa6739436a43f1f"} Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.295935 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.484573997 
podStartE2EDuration="9.295914073s" podCreationTimestamp="2026-02-19 21:05:08 +0000 UTC" firstStartedPulling="2026-02-19 21:05:11.172485062 +0000 UTC m=+341.800328152" lastFinishedPulling="2026-02-19 21:05:15.983825178 +0000 UTC m=+346.611668228" observedRunningTime="2026-02-19 21:05:17.291656449 +0000 UTC m=+347.919499509" watchObservedRunningTime="2026-02-19 21:05:17.295914073 +0000 UTC m=+347.923757133" Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.576229 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.576498 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:17 crc kubenswrapper[4886]: I0219 21:05:17.584238 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:18 crc kubenswrapper[4886]: I0219 21:05:18.279646 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:05:18 crc kubenswrapper[4886]: I0219 21:05:18.325237 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:05:18 crc kubenswrapper[4886]: I0219 21:05:18.325376 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:05:18 crc kubenswrapper[4886]: I0219 21:05:18.362505 4886 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-f9d7485db-rn5ms"] Feb 19 21:05:19 crc kubenswrapper[4886]: I0219 21:05:19.784257 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:05:28 crc kubenswrapper[4886]: I0219 21:05:28.099514 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:28 crc kubenswrapper[4886]: I0219 21:05:28.100154 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:31 crc kubenswrapper[4886]: I0219 21:05:31.644783 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" podUID="7013b72c-2c60-4174-b7e9-a62de8263d50" containerName="registry" containerID="cri-o://d1ab94ca338e010d51fde28548ff9642099171bfdd1a498addb59ab6c3e79bb3" gracePeriod=30 Feb 19 21:05:32 crc kubenswrapper[4886]: I0219 21:05:32.375654 4886 generic.go:334] "Generic (PLEG): container finished" podID="7013b72c-2c60-4174-b7e9-a62de8263d50" containerID="d1ab94ca338e010d51fde28548ff9642099171bfdd1a498addb59ab6c3e79bb3" exitCode=0 Feb 19 21:05:32 crc kubenswrapper[4886]: I0219 21:05:32.375760 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" event={"ID":"7013b72c-2c60-4174-b7e9-a62de8263d50","Type":"ContainerDied","Data":"d1ab94ca338e010d51fde28548ff9642099171bfdd1a498addb59ab6c3e79bb3"} Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.399422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" event={"ID":"7013b72c-2c60-4174-b7e9-a62de8263d50","Type":"ContainerDied","Data":"0c2b224cc92010b4fd5535886a68f3302d55d5c6eb4b0181fc9dc60941e27e89"} Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.399821 4886 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2b224cc92010b4fd5535886a68f3302d55d5c6eb4b0181fc9dc60941e27e89" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.406911 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517408 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-certificates\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517474 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7013b72c-2c60-4174-b7e9-a62de8263d50-installation-pull-secrets\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517582 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-bound-sa-token\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517639 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-tls\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517667 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2k9\" (UniqueName: 
\"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-kube-api-access-zb2k9\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517830 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517897 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7013b72c-2c60-4174-b7e9-a62de8263d50-ca-trust-extracted\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.517930 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-trusted-ca\") pod \"7013b72c-2c60-4174-b7e9-a62de8263d50\" (UID: \"7013b72c-2c60-4174-b7e9-a62de8263d50\") " Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.518740 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.519293 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.525794 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7013b72c-2c60-4174-b7e9-a62de8263d50-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.527475 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-kube-api-access-zb2k9" (OuterVolumeSpecName: "kube-api-access-zb2k9") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "kube-api-access-zb2k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.533344 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7013b72c-2c60-4174-b7e9-a62de8263d50-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.535403 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.535778 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.536383 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7013b72c-2c60-4174-b7e9-a62de8263d50" (UID: "7013b72c-2c60-4174-b7e9-a62de8263d50"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619458 4886 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7013b72c-2c60-4174-b7e9-a62de8263d50-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619532 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619553 4886 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619577 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb2k9\" (UniqueName: \"kubernetes.io/projected/7013b72c-2c60-4174-b7e9-a62de8263d50-kube-api-access-zb2k9\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619597 4886 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7013b72c-2c60-4174-b7e9-a62de8263d50-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619614 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:35 crc kubenswrapper[4886]: I0219 21:05:35.619632 4886 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7013b72c-2c60-4174-b7e9-a62de8263d50-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:36 crc 
kubenswrapper[4886]: I0219 21:05:36.406981 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tp6s4" Feb 19 21:05:36 crc kubenswrapper[4886]: I0219 21:05:36.458083 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tp6s4"] Feb 19 21:05:36 crc kubenswrapper[4886]: I0219 21:05:36.467902 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tp6s4"] Feb 19 21:05:36 crc kubenswrapper[4886]: I0219 21:05:36.614058 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7013b72c-2c60-4174-b7e9-a62de8263d50" path="/var/lib/kubelet/pods/7013b72c-2c60-4174-b7e9-a62de8263d50/volumes" Feb 19 21:05:43 crc kubenswrapper[4886]: I0219 21:05:43.431976 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rn5ms" podUID="d03f4c56-f429-4911-814d-02610d24f7ec" containerName="console" containerID="cri-o://2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f" gracePeriod=15 Feb 19 21:05:43 crc kubenswrapper[4886]: I0219 21:05:43.949902 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rn5ms_d03f4c56-f429-4911-814d-02610d24f7ec/console/0.log" Feb 19 21:05:43 crc kubenswrapper[4886]: I0219 21:05:43.950338 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151205 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-oauth-serving-cert\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151403 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-oauth-config\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151479 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-console-config\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151524 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-serving-cert\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151591 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj95l\" (UniqueName: \"kubernetes.io/projected/d03f4c56-f429-4911-814d-02610d24f7ec-kube-api-access-gj95l\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151632 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-trusted-ca-bundle\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.151669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-service-ca\") pod \"d03f4c56-f429-4911-814d-02610d24f7ec\" (UID: \"d03f4c56-f429-4911-814d-02610d24f7ec\") " Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.152888 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-service-ca" (OuterVolumeSpecName: "service-ca") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.154064 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.154101 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.154352 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-console-config" (OuterVolumeSpecName: "console-config") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.166063 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03f4c56-f429-4911-814d-02610d24f7ec-kube-api-access-gj95l" (OuterVolumeSpecName: "kube-api-access-gj95l") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "kube-api-access-gj95l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.167186 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.168487 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d03f4c56-f429-4911-814d-02610d24f7ec" (UID: "d03f4c56-f429-4911-814d-02610d24f7ec"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.254402 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.254657 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.254775 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4c56-f429-4911-814d-02610d24f7ec-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.254921 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj95l\" (UniqueName: \"kubernetes.io/projected/d03f4c56-f429-4911-814d-02610d24f7ec-kube-api-access-gj95l\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.255047 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.255159 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.255317 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d03f4c56-f429-4911-814d-02610d24f7ec-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:44 crc 
kubenswrapper[4886]: I0219 21:05:44.483883 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rn5ms_d03f4c56-f429-4911-814d-02610d24f7ec/console/0.log" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.484559 4886 generic.go:334] "Generic (PLEG): container finished" podID="d03f4c56-f429-4911-814d-02610d24f7ec" containerID="2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f" exitCode=2 Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.484622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rn5ms" event={"ID":"d03f4c56-f429-4911-814d-02610d24f7ec","Type":"ContainerDied","Data":"2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f"} Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.484677 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rn5ms" event={"ID":"d03f4c56-f429-4911-814d-02610d24f7ec","Type":"ContainerDied","Data":"7398c33e8ae2fc639ebb6839f096840df6221e0b1ed47c6818ca40b0e1d2daa9"} Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.484675 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rn5ms" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.484734 4886 scope.go:117] "RemoveContainer" containerID="2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.522086 4886 scope.go:117] "RemoveContainer" containerID="2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f" Feb 19 21:05:44 crc kubenswrapper[4886]: E0219 21:05:44.524976 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f\": container with ID starting with 2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f not found: ID does not exist" containerID="2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.525031 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f"} err="failed to get container status \"2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f\": rpc error: code = NotFound desc = could not find container \"2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f\": container with ID starting with 2e57b15ff3778c21dc48216f964fff02a500f3370379a78a7ac0288d341ffe8f not found: ID does not exist" Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.536173 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rn5ms"] Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.543975 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rn5ms"] Feb 19 21:05:44 crc kubenswrapper[4886]: I0219 21:05:44.612502 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03f4c56-f429-4911-814d-02610d24f7ec" 
path="/var/lib/kubelet/pods/d03f4c56-f429-4911-814d-02610d24f7ec/volumes" Feb 19 21:05:48 crc kubenswrapper[4886]: I0219 21:05:48.108472 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:48 crc kubenswrapper[4886]: I0219 21:05:48.115815 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 21:05:48 crc kubenswrapper[4886]: I0219 21:05:48.324523 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:05:48 crc kubenswrapper[4886]: I0219 21:05:48.324610 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:06:09 crc kubenswrapper[4886]: I0219 21:06:09.784595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:06:09 crc kubenswrapper[4886]: I0219 21:06:09.831406 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:06:10 crc kubenswrapper[4886]: I0219 21:06:10.081104 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 21:06:18 crc kubenswrapper[4886]: I0219 21:06:18.324698 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:06:18 crc kubenswrapper[4886]: I0219 21:06:18.325425 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:06:18 crc kubenswrapper[4886]: I0219 21:06:18.325495 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:06:18 crc kubenswrapper[4886]: I0219 21:06:18.326973 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3464edb1ae3ca2be01c37ffa7dd5b104876610570320b94e212300c87c30c890"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:06:18 crc kubenswrapper[4886]: I0219 21:06:18.327076 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://3464edb1ae3ca2be01c37ffa7dd5b104876610570320b94e212300c87c30c890" gracePeriod=600 Feb 19 21:06:19 crc kubenswrapper[4886]: I0219 21:06:19.107435 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="3464edb1ae3ca2be01c37ffa7dd5b104876610570320b94e212300c87c30c890" exitCode=0 Feb 19 21:06:19 crc kubenswrapper[4886]: I0219 21:06:19.107532 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" 
event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"3464edb1ae3ca2be01c37ffa7dd5b104876610570320b94e212300c87c30c890"} Feb 19 21:06:19 crc kubenswrapper[4886]: I0219 21:06:19.108141 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"5ae729cd998c06e3f4a3bc7ed90125f52db03861def52b8d2aacbec9bd8a3520"} Feb 19 21:06:19 crc kubenswrapper[4886]: I0219 21:06:19.108183 4886 scope.go:117] "RemoveContainer" containerID="2904e24cd8fe99dbbf9af717a158ef3f78920d87594d09f2f28fd1f21ba945af" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.585015 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75cd8bb48b-w5lkt"] Feb 19 21:07:01 crc kubenswrapper[4886]: E0219 21:07:01.586532 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03f4c56-f429-4911-814d-02610d24f7ec" containerName="console" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.586568 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03f4c56-f429-4911-814d-02610d24f7ec" containerName="console" Feb 19 21:07:01 crc kubenswrapper[4886]: E0219 21:07:01.586603 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7013b72c-2c60-4174-b7e9-a62de8263d50" containerName="registry" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.586622 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7013b72c-2c60-4174-b7e9-a62de8263d50" containerName="registry" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.586911 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7013b72c-2c60-4174-b7e9-a62de8263d50" containerName="registry" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.586934 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03f4c56-f429-4911-814d-02610d24f7ec" containerName="console" Feb 19 
21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.587566 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.600008 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75cd8bb48b-w5lkt"] Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.719459 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-service-ca\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.719805 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-console-config\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.719914 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvjc\" (UniqueName: \"kubernetes.io/projected/65d827c6-eb13-4786-ab1a-df0067a3772e-kube-api-access-jnvjc\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.720015 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-trusted-ca-bundle\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 
21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.720130 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-serving-cert\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.720242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-oauth-config\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.720840 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-oauth-serving-cert\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.821622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-service-ca\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.822124 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-console-config\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " 
pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.822395 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvjc\" (UniqueName: \"kubernetes.io/projected/65d827c6-eb13-4786-ab1a-df0067a3772e-kube-api-access-jnvjc\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.823007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-service-ca\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.823576 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-trusted-ca-bundle\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.823640 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-console-config\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.824169 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-serving-cert\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" 
Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.824677 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-trusted-ca-bundle\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.826031 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-oauth-config\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.826222 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-oauth-serving-cert\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.827831 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-oauth-serving-cert\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.833039 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-oauth-config\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc 
kubenswrapper[4886]: I0219 21:07:01.833187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-serving-cert\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.853029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvjc\" (UniqueName: \"kubernetes.io/projected/65d827c6-eb13-4786-ab1a-df0067a3772e-kube-api-access-jnvjc\") pod \"console-75cd8bb48b-w5lkt\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:01 crc kubenswrapper[4886]: I0219 21:07:01.920712 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:02 crc kubenswrapper[4886]: I0219 21:07:02.266888 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75cd8bb48b-w5lkt"] Feb 19 21:07:02 crc kubenswrapper[4886]: I0219 21:07:02.485393 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75cd8bb48b-w5lkt" event={"ID":"65d827c6-eb13-4786-ab1a-df0067a3772e","Type":"ContainerStarted","Data":"9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca"} Feb 19 21:07:02 crc kubenswrapper[4886]: I0219 21:07:02.485467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75cd8bb48b-w5lkt" event={"ID":"65d827c6-eb13-4786-ab1a-df0067a3772e","Type":"ContainerStarted","Data":"2409cee877d8620d93013d5eafc035c75cc9e81853726e7d807e5e96415273d3"} Feb 19 21:07:02 crc kubenswrapper[4886]: I0219 21:07:02.528174 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75cd8bb48b-w5lkt" podStartSLOduration=1.5281355639999998 
podStartE2EDuration="1.528135564s" podCreationTimestamp="2026-02-19 21:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:07:02.525484849 +0000 UTC m=+453.153327929" watchObservedRunningTime="2026-02-19 21:07:02.528135564 +0000 UTC m=+453.155978644" Feb 19 21:07:11 crc kubenswrapper[4886]: I0219 21:07:11.921790 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:11 crc kubenswrapper[4886]: I0219 21:07:11.923084 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:11 crc kubenswrapper[4886]: I0219 21:07:11.929097 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:12 crc kubenswrapper[4886]: I0219 21:07:12.578079 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:07:12 crc kubenswrapper[4886]: I0219 21:07:12.657094 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-557fd44cbd-chqz4"] Feb 19 21:07:37 crc kubenswrapper[4886]: I0219 21:07:37.709791 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-557fd44cbd-chqz4" podUID="dc106b01-da5f-4c08-a35f-5c83e6c2114d" containerName="console" containerID="cri-o://0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8" gracePeriod=15 Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.642831 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-557fd44cbd-chqz4_dc106b01-da5f-4c08-a35f-5c83e6c2114d/console/0.log" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.643301 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.787674 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-oauth-config\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.787900 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-service-ca\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.788030 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-trusted-ca-bundle\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.788093 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf4gr\" (UniqueName: \"kubernetes.io/projected/dc106b01-da5f-4c08-a35f-5c83e6c2114d-kube-api-access-jf4gr\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.788137 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-serving-cert\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.788216 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-oauth-serving-cert\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.788254 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-config\") pod \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\" (UID: \"dc106b01-da5f-4c08-a35f-5c83e6c2114d\") " Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.789676 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.789694 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.790091 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-config" (OuterVolumeSpecName: "console-config") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.790414 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-service-ca" (OuterVolumeSpecName: "service-ca") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.797661 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.798854 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.801414 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-557fd44cbd-chqz4_dc106b01-da5f-4c08-a35f-5c83e6c2114d/console/0.log" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.801485 4886 generic.go:334] "Generic (PLEG): container finished" podID="dc106b01-da5f-4c08-a35f-5c83e6c2114d" containerID="0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8" exitCode=2 Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.801523 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-557fd44cbd-chqz4" event={"ID":"dc106b01-da5f-4c08-a35f-5c83e6c2114d","Type":"ContainerDied","Data":"0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8"} Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.801558 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-557fd44cbd-chqz4" event={"ID":"dc106b01-da5f-4c08-a35f-5c83e6c2114d","Type":"ContainerDied","Data":"be0cc2e6d1777d0bc6aff0af2471842f89d07ba556efc081e1eadd19eebc6036"} Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.801587 4886 scope.go:117] "RemoveContainer" containerID="0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.801611 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-557fd44cbd-chqz4" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.802587 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc106b01-da5f-4c08-a35f-5c83e6c2114d-kube-api-access-jf4gr" (OuterVolumeSpecName: "kube-api-access-jf4gr") pod "dc106b01-da5f-4c08-a35f-5c83e6c2114d" (UID: "dc106b01-da5f-4c08-a35f-5c83e6c2114d"). InnerVolumeSpecName "kube-api-access-jf4gr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.872023 4886 scope.go:117] "RemoveContainer" containerID="0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8" Feb 19 21:07:38 crc kubenswrapper[4886]: E0219 21:07:38.872657 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8\": container with ID starting with 0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8 not found: ID does not exist" containerID="0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.872717 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8"} err="failed to get container status \"0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8\": rpc error: code = NotFound desc = could not find container \"0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8\": container with ID starting with 0cd1d6e33c1808ab49b0f3703aeda192986ff2ed21e06f63991cff67e4032ce8 not found: ID does not exist" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890219 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890279 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890299 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf4gr\" (UniqueName: 
\"kubernetes.io/projected/dc106b01-da5f-4c08-a35f-5c83e6c2114d-kube-api-access-jf4gr\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890311 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890323 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890334 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:38 crc kubenswrapper[4886]: I0219 21:07:38.890344 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc106b01-da5f-4c08-a35f-5c83e6c2114d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:07:39 crc kubenswrapper[4886]: I0219 21:07:39.153117 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-557fd44cbd-chqz4"] Feb 19 21:07:39 crc kubenswrapper[4886]: I0219 21:07:39.162054 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-557fd44cbd-chqz4"] Feb 19 21:07:40 crc kubenswrapper[4886]: I0219 21:07:40.613379 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc106b01-da5f-4c08-a35f-5c83e6c2114d" path="/var/lib/kubelet/pods/dc106b01-da5f-4c08-a35f-5c83e6c2114d/volumes" Feb 19 21:08:18 crc kubenswrapper[4886]: I0219 21:08:18.325239 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:08:18 crc kubenswrapper[4886]: I0219 21:08:18.326120 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:08:30 crc kubenswrapper[4886]: I0219 21:08:30.840037 4886 scope.go:117] "RemoveContainer" containerID="d1ab94ca338e010d51fde28548ff9642099171bfdd1a498addb59ab6c3e79bb3" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.310539 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2"] Feb 19 21:08:46 crc kubenswrapper[4886]: E0219 21:08:46.311621 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc106b01-da5f-4c08-a35f-5c83e6c2114d" containerName="console" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.311646 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc106b01-da5f-4c08-a35f-5c83e6c2114d" containerName="console" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.311981 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc106b01-da5f-4c08-a35f-5c83e6c2114d" containerName="console" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.313727 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.320984 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.333754 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2"] Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.455545 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjv88\" (UniqueName: \"kubernetes.io/projected/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-kube-api-access-pjv88\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.455823 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.456038 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: 
I0219 21:08:46.557631 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.557721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjv88\" (UniqueName: \"kubernetes.io/projected/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-kube-api-access-pjv88\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.557861 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.558556 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.558703 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.591690 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjv88\" (UniqueName: \"kubernetes.io/projected/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-kube-api-access-pjv88\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:46 crc kubenswrapper[4886]: I0219 21:08:46.633653 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:47 crc kubenswrapper[4886]: I0219 21:08:47.070770 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2"] Feb 19 21:08:47 crc kubenswrapper[4886]: I0219 21:08:47.336167 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" event={"ID":"06ecbcd7-7b58-4a22-8f21-efb01efb0e07","Type":"ContainerStarted","Data":"737fd1dba373bf85a47a073ec9dcc577e5139754cb5098eba4dd6e0c3df9db16"} Feb 19 21:08:47 crc kubenswrapper[4886]: I0219 21:08:47.336299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" event={"ID":"06ecbcd7-7b58-4a22-8f21-efb01efb0e07","Type":"ContainerStarted","Data":"934f652272b8df8eda76665a1b12a5ee675000c7600b41f31cd0ccec79c33a9a"} Feb 19 21:08:48 crc kubenswrapper[4886]: I0219 21:08:48.324334 4886 
patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:08:48 crc kubenswrapper[4886]: I0219 21:08:48.324690 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:08:48 crc kubenswrapper[4886]: I0219 21:08:48.343080 4886 generic.go:334] "Generic (PLEG): container finished" podID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerID="737fd1dba373bf85a47a073ec9dcc577e5139754cb5098eba4dd6e0c3df9db16" exitCode=0 Feb 19 21:08:48 crc kubenswrapper[4886]: I0219 21:08:48.343154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" event={"ID":"06ecbcd7-7b58-4a22-8f21-efb01efb0e07","Type":"ContainerDied","Data":"737fd1dba373bf85a47a073ec9dcc577e5139754cb5098eba4dd6e0c3df9db16"} Feb 19 21:08:48 crc kubenswrapper[4886]: I0219 21:08:48.344820 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:08:50 crc kubenswrapper[4886]: I0219 21:08:50.361249 4886 generic.go:334] "Generic (PLEG): container finished" podID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerID="3eb6d471c1bf4d1db8165ffa3938ac62adb47e130e6ebedd362fba77e937fc23" exitCode=0 Feb 19 21:08:50 crc kubenswrapper[4886]: I0219 21:08:50.361511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" 
event={"ID":"06ecbcd7-7b58-4a22-8f21-efb01efb0e07","Type":"ContainerDied","Data":"3eb6d471c1bf4d1db8165ffa3938ac62adb47e130e6ebedd362fba77e937fc23"} Feb 19 21:08:51 crc kubenswrapper[4886]: I0219 21:08:51.372625 4886 generic.go:334] "Generic (PLEG): container finished" podID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerID="0f62cf7c36bb51db003c18d83e932d3093f682c7287574a4723e42d75dc855cd" exitCode=0 Feb 19 21:08:51 crc kubenswrapper[4886]: I0219 21:08:51.372749 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" event={"ID":"06ecbcd7-7b58-4a22-8f21-efb01efb0e07","Type":"ContainerDied","Data":"0f62cf7c36bb51db003c18d83e932d3093f682c7287574a4723e42d75dc855cd"} Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.683054 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.881130 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-bundle\") pod \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.881303 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-util\") pod \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\" (UID: \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.881375 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjv88\" (UniqueName: \"kubernetes.io/projected/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-kube-api-access-pjv88\") pod \"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\" (UID: 
\"06ecbcd7-7b58-4a22-8f21-efb01efb0e07\") " Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.884884 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-bundle" (OuterVolumeSpecName: "bundle") pod "06ecbcd7-7b58-4a22-8f21-efb01efb0e07" (UID: "06ecbcd7-7b58-4a22-8f21-efb01efb0e07"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.890812 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-kube-api-access-pjv88" (OuterVolumeSpecName: "kube-api-access-pjv88") pod "06ecbcd7-7b58-4a22-8f21-efb01efb0e07" (UID: "06ecbcd7-7b58-4a22-8f21-efb01efb0e07"). InnerVolumeSpecName "kube-api-access-pjv88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.983332 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjv88\" (UniqueName: \"kubernetes.io/projected/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-kube-api-access-pjv88\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:52 crc kubenswrapper[4886]: I0219 21:08:52.983389 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:53 crc kubenswrapper[4886]: I0219 21:08:53.040722 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-util" (OuterVolumeSpecName: "util") pod "06ecbcd7-7b58-4a22-8f21-efb01efb0e07" (UID: "06ecbcd7-7b58-4a22-8f21-efb01efb0e07"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:08:53 crc kubenswrapper[4886]: I0219 21:08:53.085038 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06ecbcd7-7b58-4a22-8f21-efb01efb0e07-util\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:53 crc kubenswrapper[4886]: I0219 21:08:53.390251 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" event={"ID":"06ecbcd7-7b58-4a22-8f21-efb01efb0e07","Type":"ContainerDied","Data":"934f652272b8df8eda76665a1b12a5ee675000c7600b41f31cd0ccec79c33a9a"} Feb 19 21:08:53 crc kubenswrapper[4886]: I0219 21:08:53.390303 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934f652272b8df8eda76665a1b12a5ee675000c7600b41f31cd0ccec79c33a9a" Feb 19 21:08:53 crc kubenswrapper[4886]: I0219 21:08:53.390327 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g64p2" Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.588686 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nclwh"] Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589581 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-controller" containerID="cri-o://8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0" gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589702 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="northd" containerID="cri-o://82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828" 
gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589743 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-acl-logging" containerID="cri-o://2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e" gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589707 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="sbdb" containerID="cri-o://76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548" gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589766 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-node" containerID="cri-o://d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379" gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589847 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a" gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.589957 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="nbdb" containerID="cri-o://f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16" gracePeriod=30 Feb 19 21:08:57 crc kubenswrapper[4886]: I0219 21:08:57.689811 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" containerID="cri-o://ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5" gracePeriod=30 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.425724 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/2.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.426227 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/1.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.426298 4886 generic.go:334] "Generic (PLEG): container finished" podID="83f8fca5-68c6-4300-b2d8-64a58bf92a64" containerID="b1d24b72a538ea2a24b25c0abe1e665851c85706dcd18e501c91a69c90bfa883" exitCode=2 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.426374 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerDied","Data":"b1d24b72a538ea2a24b25c0abe1e665851c85706dcd18e501c91a69c90bfa883"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.426433 4886 scope.go:117] "RemoveContainer" containerID="0732cf035727cf4d641fbec8677771a7eb77d51b7a2b7e60ee9566b3eb62a0ad" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.426938 4886 scope.go:117] "RemoveContainer" containerID="b1d24b72a538ea2a24b25c0abe1e665851c85706dcd18e501c91a69c90bfa883" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.427135 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rnffz_openshift-multus(83f8fca5-68c6-4300-b2d8-64a58bf92a64)\"" pod="openshift-multus/multus-rnffz" podUID="83f8fca5-68c6-4300-b2d8-64a58bf92a64" Feb 19 21:08:58 
crc kubenswrapper[4886]: I0219 21:08:58.439336 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovnkube-controller/3.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442015 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovn-acl-logging/0.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442536 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovn-controller/0.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442906 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5" exitCode=0 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442936 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548" exitCode=0 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442947 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16" exitCode=0 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442953 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828" exitCode=0 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442960 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e" exitCode=143 Feb 19 21:08:58 crc 
kubenswrapper[4886]: I0219 21:08:58.442970 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0" exitCode=143 Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.442977 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.443016 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.443027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.443035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.443046 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.443056 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" 
event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0"} Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.459370 4886 scope.go:117] "RemoveContainer" containerID="1d5443a6295b93c79e43e0db5590935fe8e061ba198cd9dc1c9115f431aa8e93" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.789947 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovn-acl-logging/0.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.790404 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovn-controller/0.log" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.790769 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882658 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmjqf\" (UniqueName: \"kubernetes.io/projected/87d8f125-379b-4e5a-bedc-b55cf9edb00a-kube-api-access-pmjqf\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882713 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-ovn-kubernetes\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882736 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-slash\") pod 
\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882757 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-systemd\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882776 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-kubelet\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882809 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-netd\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882826 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882849 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-log-socket\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882868 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-netns\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882885 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovn-node-metrics-cert\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882900 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-script-lib\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882925 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-ovn\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882941 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-var-lib-openvswitch\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882958 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-env-overrides\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: 
\"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.882990 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-bin\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.883008 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-systemd-units\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.883033 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-etc-openvswitch\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.883048 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-node-log\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.883060 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-openvswitch\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.883080 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-config\") pod \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\" (UID: \"87d8f125-379b-4e5a-bedc-b55cf9edb00a\") " Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.883631 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884320 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884378 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884387 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884376 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884444 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-slash" (OuterVolumeSpecName: "host-slash") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884675 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884711 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884733 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884752 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-log-socket" (OuterVolumeSpecName: "log-socket") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884771 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884791 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884805 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884812 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-node-log" (OuterVolumeSpecName: "node-log") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.884836 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.897911 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d8f125-379b-4e5a-bedc-b55cf9edb00a-kube-api-access-pmjqf" (OuterVolumeSpecName: "kube-api-access-pmjqf") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "kube-api-access-pmjqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.901713 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914678 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5vhjh"] Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914876 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kubecfg-setup" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914892 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kubecfg-setup" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914904 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914910 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914917 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="util" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914923 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="util" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914932 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="extract" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914937 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="extract" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914947 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="pull" Feb 19 21:08:58 crc 
kubenswrapper[4886]: I0219 21:08:58.914952 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="pull" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914961 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="northd" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914967 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="northd" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914978 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="nbdb" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914984 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="nbdb" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.914993 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-node" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.914999 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-node" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915009 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915015 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915025 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 
21:08:58.915030 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915038 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="sbdb" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915045 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="sbdb" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915054 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915059 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915066 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915072 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915079 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-acl-logging" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915085 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-acl-logging" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915186 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 
21:08:58.915197 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915205 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915214 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbcd7-7b58-4a22-8f21-efb01efb0e07" containerName="extract" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915221 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovn-acl-logging" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915228 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915235 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915241 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="northd" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915249 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="nbdb" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915273 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="sbdb" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915282 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="kube-rbac-proxy-node" Feb 19 21:08:58 crc kubenswrapper[4886]: 
I0219 21:08:58.915290 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915395 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915403 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: E0219 21:08:58.915415 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915420 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.915517 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerName="ovnkube-controller" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.919510 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "87d8f125-379b-4e5a-bedc-b55cf9edb00a" (UID: "87d8f125-379b-4e5a-bedc-b55cf9edb00a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.926227 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984096 4886 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984376 4886 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984387 4886 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984397 4886 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984407 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984417 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984425 4886 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-ovn\") on node \"crc\" DevicePath \"\"" 
Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984433 4886 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984441 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984449 4886 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984456 4886 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984466 4886 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984473 4886 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984481 4886 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984489 4886 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87d8f125-379b-4e5a-bedc-b55cf9edb00a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984497 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmjqf\" (UniqueName: \"kubernetes.io/projected/87d8f125-379b-4e5a-bedc-b55cf9edb00a-kube-api-access-pmjqf\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984505 4886 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984513 4886 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984520 4886 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:58 crc kubenswrapper[4886]: I0219 21:08:58.984528 4886 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87d8f125-379b-4e5a-bedc-b55cf9edb00a-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086025 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-run-netns\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086070 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-node-log\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086099 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-cni-bin\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086116 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-var-lib-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086133 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwpt\" (UniqueName: \"kubernetes.io/projected/c735ed2a-c0e6-4833-a21d-4adb485e1101-kube-api-access-jbwpt\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-systemd-units\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086329 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-ovn\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086347 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovn-node-metrics-cert\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086377 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovnkube-script-lib\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086393 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-run-ovn-kubernetes\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086424 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-slash\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc 
kubenswrapper[4886]: I0219 21:08:59.086458 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-cni-netd\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovnkube-config\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086501 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-kubelet\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086546 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-etc-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-systemd\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 
21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086709 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-log-socket\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.086733 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-env-overrides\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192587 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-run-netns\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192715 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-run-netns\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192711 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-node-log\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192841 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-cni-bin\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192857 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-node-log\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192878 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-var-lib-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192917 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwpt\" (UniqueName: \"kubernetes.io/projected/c735ed2a-c0e6-4833-a21d-4adb485e1101-kube-api-access-jbwpt\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192953 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-systemd-units\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192960 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-cni-bin\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.192977 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-ovn\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovn-node-metrics-cert\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193027 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-run-ovn-kubernetes\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193052 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovnkube-script-lib\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193097 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-slash\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193120 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovnkube-config\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193146 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-cni-netd\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-kubelet\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193210 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-etc-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193275 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-systemd\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193335 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193387 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193426 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-log-socket\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-env-overrides\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.193669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-var-lib-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194201 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-env-overrides\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-cni-netd\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-kubelet\") pod \"ovnkube-node-5vhjh\" (UID: 
\"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194329 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-etc-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194356 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-systemd\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194416 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-systemd-units\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194807 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-log-socket\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-slash\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194902 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-host-run-ovn-kubernetes\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.194932 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-ovn\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.195008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovnkube-script-lib\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.195077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c735ed2a-c0e6-4833-a21d-4adb485e1101-run-openvswitch\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.195447 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovnkube-config\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.197197 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c735ed2a-c0e6-4833-a21d-4adb485e1101-ovn-node-metrics-cert\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.206901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwpt\" (UniqueName: \"kubernetes.io/projected/c735ed2a-c0e6-4833-a21d-4adb485e1101-kube-api-access-jbwpt\") pod \"ovnkube-node-5vhjh\" (UID: \"c735ed2a-c0e6-4833-a21d-4adb485e1101\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.253557 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.449336 4886 generic.go:334] "Generic (PLEG): container finished" podID="c735ed2a-c0e6-4833-a21d-4adb485e1101" containerID="ac17db94a76fd5a49e3ee108683c77736a6f2a12e5ef3342b96a0d9642405456" exitCode=0 Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.449400 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerDied","Data":"ac17db94a76fd5a49e3ee108683c77736a6f2a12e5ef3342b96a0d9642405456"} Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.449428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"be46fc3e594102fb21362edea9790da9238b0d5a3644efc4b7a9ee194d9596ae"} Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.450703 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/2.log" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.455450 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovn-acl-logging/0.log" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.455937 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nclwh_87d8f125-379b-4e5a-bedc-b55cf9edb00a/ovn-controller/0.log" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456299 4886 generic.go:334] "Generic (PLEG): container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a" exitCode=0 Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456332 4886 generic.go:334] "Generic (PLEG): 
container finished" podID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" containerID="d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379" exitCode=0 Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456358 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a"} Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456384 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379"} Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456391 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456407 4886 scope.go:117] "RemoveContainer" containerID="ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.456395 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nclwh" event={"ID":"87d8f125-379b-4e5a-bedc-b55cf9edb00a","Type":"ContainerDied","Data":"c7a0db8c82dc24886b687b7a4b2e408a2f81f598d81136afc0179e7146435526"} Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.482596 4886 scope.go:117] "RemoveContainer" containerID="76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.506275 4886 scope.go:117] "RemoveContainer" containerID="f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.527150 4886 scope.go:117] "RemoveContainer" 
containerID="82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.536582 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nclwh"] Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.543610 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nclwh"] Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.549147 4886 scope.go:117] "RemoveContainer" containerID="89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.600926 4886 scope.go:117] "RemoveContainer" containerID="d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.628560 4886 scope.go:117] "RemoveContainer" containerID="2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.652172 4886 scope.go:117] "RemoveContainer" containerID="8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.664707 4886 scope.go:117] "RemoveContainer" containerID="637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.681623 4886 scope.go:117] "RemoveContainer" containerID="ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.682433 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5\": container with ID starting with ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5 not found: ID does not exist" containerID="ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5" Feb 19 21:08:59 crc kubenswrapper[4886]: 
I0219 21:08:59.682479 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5"} err="failed to get container status \"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5\": rpc error: code = NotFound desc = could not find container \"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5\": container with ID starting with ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.682505 4886 scope.go:117] "RemoveContainer" containerID="76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.682930 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\": container with ID starting with 76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548 not found: ID does not exist" containerID="76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.682969 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548"} err="failed to get container status \"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\": rpc error: code = NotFound desc = could not find container \"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\": container with ID starting with 76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.683027 4886 scope.go:117] "RemoveContainer" containerID="f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16" Feb 19 21:08:59 crc 
kubenswrapper[4886]: E0219 21:08:59.683375 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\": container with ID starting with f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16 not found: ID does not exist" containerID="f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.683401 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16"} err="failed to get container status \"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\": rpc error: code = NotFound desc = could not find container \"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\": container with ID starting with f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.683417 4886 scope.go:117] "RemoveContainer" containerID="82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.683655 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\": container with ID starting with 82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828 not found: ID does not exist" containerID="82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.683678 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828"} err="failed to get container status 
\"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\": rpc error: code = NotFound desc = could not find container \"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\": container with ID starting with 82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.683695 4886 scope.go:117] "RemoveContainer" containerID="89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.684005 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\": container with ID starting with 89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a not found: ID does not exist" containerID="89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.684043 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a"} err="failed to get container status \"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\": rpc error: code = NotFound desc = could not find container \"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\": container with ID starting with 89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.684087 4886 scope.go:117] "RemoveContainer" containerID="d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.684386 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\": container with ID starting with d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379 not found: ID does not exist" containerID="d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.684410 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379"} err="failed to get container status \"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\": rpc error: code = NotFound desc = could not find container \"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\": container with ID starting with d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.684424 4886 scope.go:117] "RemoveContainer" containerID="2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.684648 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\": container with ID starting with 2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e not found: ID does not exist" containerID="2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.684681 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e"} err="failed to get container status \"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\": rpc error: code = NotFound desc = could not find container \"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\": container with ID 
starting with 2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.684704 4886 scope.go:117] "RemoveContainer" containerID="8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.685040 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\": container with ID starting with 8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0 not found: ID does not exist" containerID="8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.685071 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0"} err="failed to get container status \"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\": rpc error: code = NotFound desc = could not find container \"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\": container with ID starting with 8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.685088 4886 scope.go:117] "RemoveContainer" containerID="637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1" Feb 19 21:08:59 crc kubenswrapper[4886]: E0219 21:08:59.686505 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\": container with ID starting with 637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1 not found: ID does not exist" containerID="637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1" Feb 19 
21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.686534 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1"} err="failed to get container status \"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\": rpc error: code = NotFound desc = could not find container \"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\": container with ID starting with 637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.686547 4886 scope.go:117] "RemoveContainer" containerID="ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.686750 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5"} err="failed to get container status \"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5\": rpc error: code = NotFound desc = could not find container \"ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5\": container with ID starting with ed1f61bce5fe0d72a072fa0720c7c32c59b08203d401027006d86b34f26ec8a5 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.686774 4886 scope.go:117] "RemoveContainer" containerID="76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687090 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548"} err="failed to get container status \"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\": rpc error: code = NotFound desc = could not find container 
\"76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548\": container with ID starting with 76d1ac64f54eb1de6f51295cadca4f3e796ea31f4e3fcf6b753dd86b72bf1548 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687119 4886 scope.go:117] "RemoveContainer" containerID="f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687352 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16"} err="failed to get container status \"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\": rpc error: code = NotFound desc = could not find container \"f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16\": container with ID starting with f1f52c6e1b7a0a83099abe610175855495562e19b540b42bbbea3ee0125e1a16 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687378 4886 scope.go:117] "RemoveContainer" containerID="82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687705 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828"} err="failed to get container status \"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\": rpc error: code = NotFound desc = could not find container \"82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828\": container with ID starting with 82ee37972af1e243f57c78221af32f97b246fc7db6d72d7ed169612bc833a828 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687725 4886 scope.go:117] "RemoveContainer" containerID="89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687952 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a"} err="failed to get container status \"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\": rpc error: code = NotFound desc = could not find container \"89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a\": container with ID starting with 89888885a8699b820880e82c3782199e9f1a5f40f446473c5cb181c7530f266a not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.687977 4886 scope.go:117] "RemoveContainer" containerID="d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688180 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379"} err="failed to get container status \"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\": rpc error: code = NotFound desc = could not find container \"d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379\": container with ID starting with d24aa0acc93dd088f8610e6e08fea547b604ebfa8e8848f0b683a0bc45c03379 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688197 4886 scope.go:117] "RemoveContainer" containerID="2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688422 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e"} err="failed to get container status \"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\": rpc error: code = NotFound desc = could not find container \"2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e\": container with ID starting with 
2c0eab267d53659a29a57ebf638fb61a7477f546b0922d46c00d4943ddeb242e not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688439 4886 scope.go:117] "RemoveContainer" containerID="8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688622 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0"} err="failed to get container status \"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\": rpc error: code = NotFound desc = could not find container \"8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0\": container with ID starting with 8ed5c1646cdd13c4bd7fae9509b38c88912aa5f0e7db8371c7b824c512bd3ec0 not found: ID does not exist" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688640 4886 scope.go:117] "RemoveContainer" containerID="637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1" Feb 19 21:08:59 crc kubenswrapper[4886]: I0219 21:08:59.688832 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1"} err="failed to get container status \"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\": rpc error: code = NotFound desc = could not find container \"637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1\": container with ID starting with 637e6e9ebc327bc819e40d040ddc2c9d20ee8a44882320d036bea56dc8612aa1 not found: ID does not exist" Feb 19 21:09:00 crc kubenswrapper[4886]: I0219 21:09:00.463933 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"de1c5e09cf8fa0e0541d8b0dbc41d143c9df27503810dc7add760fddf697179d"} Feb 19 21:09:00 crc kubenswrapper[4886]: 
I0219 21:09:00.464193 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"def7ef28920749908e5bbc12608de62f48567f8f777318b9afb8073dbed63f1b"} Feb 19 21:09:00 crc kubenswrapper[4886]: I0219 21:09:00.464208 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"2a22eedaad556cd72e8da8e2a2f04ae1f7ccb710a225e7e5ca89405dd72d2994"} Feb 19 21:09:00 crc kubenswrapper[4886]: I0219 21:09:00.464216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"3bd77e368eea17b51e3596eec4552aa25a65cdf366c4fe0df660292fd4b8439f"} Feb 19 21:09:00 crc kubenswrapper[4886]: I0219 21:09:00.464226 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"69d23ae1dc60a3a779ddf0771a1d675ee352672699d3294514a12d26aa271322"} Feb 19 21:09:00 crc kubenswrapper[4886]: I0219 21:09:00.464234 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"7188c63cf84d5aed7d54fb145b2bc8b0b887f45448fe16a7129ebef76e276a1b"} Feb 19 21:09:00 crc kubenswrapper[4886]: I0219 21:09:00.608719 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d8f125-379b-4e5a-bedc-b55cf9edb00a" path="/var/lib/kubelet/pods/87d8f125-379b-4e5a-bedc-b55cf9edb00a/volumes" Feb 19 21:09:02 crc kubenswrapper[4886]: I0219 21:09:02.477469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" 
event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"f21824176bb8d64eddda5076f94ee64ba39adb7d41ab620186c5bf47f272b3d7"} Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.041777 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw"] Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.042608 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.044963 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.047378 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.047695 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-97c24" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.161712 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf"] Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.162734 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.164323 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-62vkh" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.164399 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.165688 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx"] Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.166456 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.198562 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59ln\" (UniqueName: \"kubernetes.io/projected/30a3eec6-660b-4a66-a96b-a63626faf87e-kube-api-access-x59ln\") pod \"obo-prometheus-operator-68bc856cb9-7zllw\" (UID: \"30a3eec6-660b-4a66-a96b-a63626faf87e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.299938 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/487e178a-7f42-4686-a24a-1eb0459cdde3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx\" (UID: \"487e178a-7f42-4686-a24a-1eb0459cdde3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.300398 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/487e178a-7f42-4686-a24a-1eb0459cdde3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx\" (UID: \"487e178a-7f42-4686-a24a-1eb0459cdde3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.300461 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b502ab55-dc43-4a02-b47f-5fcd86f3a827-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf\" (UID: \"b502ab55-dc43-4a02-b47f-5fcd86f3a827\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.300689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59ln\" (UniqueName: \"kubernetes.io/projected/30a3eec6-660b-4a66-a96b-a63626faf87e-kube-api-access-x59ln\") pod \"obo-prometheus-operator-68bc856cb9-7zllw\" (UID: \"30a3eec6-660b-4a66-a96b-a63626faf87e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.300744 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b502ab55-dc43-4a02-b47f-5fcd86f3a827-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf\" (UID: \"b502ab55-dc43-4a02-b47f-5fcd86f3a827\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.323382 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59ln\" (UniqueName: 
\"kubernetes.io/projected/30a3eec6-660b-4a66-a96b-a63626faf87e-kube-api-access-x59ln\") pod \"obo-prometheus-operator-68bc856cb9-7zllw\" (UID: \"30a3eec6-660b-4a66-a96b-a63626faf87e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.357037 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.386354 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(455f58931ac5b1b7329a775b9a82aca67f66e3822b86daa040100f7decfe3fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.386422 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(455f58931ac5b1b7329a775b9a82aca67f66e3822b86daa040100f7decfe3fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.386444 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(455f58931ac5b1b7329a775b9a82aca67f66e3822b86daa040100f7decfe3fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.386490 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators(30a3eec6-660b-4a66-a96b-a63626faf87e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators(30a3eec6-660b-4a66-a96b-a63626faf87e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(455f58931ac5b1b7329a775b9a82aca67f66e3822b86daa040100f7decfe3fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" podUID="30a3eec6-660b-4a66-a96b-a63626faf87e" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.401900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b502ab55-dc43-4a02-b47f-5fcd86f3a827-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf\" (UID: \"b502ab55-dc43-4a02-b47f-5fcd86f3a827\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.401988 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b502ab55-dc43-4a02-b47f-5fcd86f3a827-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf\" (UID: \"b502ab55-dc43-4a02-b47f-5fcd86f3a827\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.402020 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/487e178a-7f42-4686-a24a-1eb0459cdde3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx\" (UID: \"487e178a-7f42-4686-a24a-1eb0459cdde3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.402056 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/487e178a-7f42-4686-a24a-1eb0459cdde3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx\" (UID: \"487e178a-7f42-4686-a24a-1eb0459cdde3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.406759 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/487e178a-7f42-4686-a24a-1eb0459cdde3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx\" (UID: \"487e178a-7f42-4686-a24a-1eb0459cdde3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.408667 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/487e178a-7f42-4686-a24a-1eb0459cdde3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx\" (UID: \"487e178a-7f42-4686-a24a-1eb0459cdde3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.409151 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b502ab55-dc43-4a02-b47f-5fcd86f3a827-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf\" (UID: 
\"b502ab55-dc43-4a02-b47f-5fcd86f3a827\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.409725 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b502ab55-dc43-4a02-b47f-5fcd86f3a827-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf\" (UID: \"b502ab55-dc43-4a02-b47f-5fcd86f3a827\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.420430 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pnltp"] Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.421135 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.423427 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-s5sjv" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.423454 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.479499 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.491831 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.502977 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7trx\" (UniqueName: \"kubernetes.io/projected/6d5c4d2c-ca63-4cba-9ea5-fba7281716b4-kube-api-access-k7trx\") pod \"observability-operator-59bdc8b94-pnltp\" (UID: \"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4\") " pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.503046 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d5c4d2c-ca63-4cba-9ea5-fba7281716b4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pnltp\" (UID: \"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4\") " pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.521313 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(7eaa270ad425f8bf9f9df6193fdc3dc7115eaf140b79a1489f3ba99fd5d915e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.521380 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(7eaa270ad425f8bf9f9df6193fdc3dc7115eaf140b79a1489f3ba99fd5d915e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.521403 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(7eaa270ad425f8bf9f9df6193fdc3dc7115eaf140b79a1489f3ba99fd5d915e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.521450 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators(b502ab55-dc43-4a02-b47f-5fcd86f3a827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators(b502ab55-dc43-4a02-b47f-5fcd86f3a827)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(7eaa270ad425f8bf9f9df6193fdc3dc7115eaf140b79a1489f3ba99fd5d915e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" podUID="b502ab55-dc43-4a02-b47f-5fcd86f3a827" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.536439 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(3e20078638cf613f5033b5b0a7e40edcafd1a8b8a63f8e2a21059113e8d5eb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.536503 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(3e20078638cf613f5033b5b0a7e40edcafd1a8b8a63f8e2a21059113e8d5eb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.536524 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(3e20078638cf613f5033b5b0a7e40edcafd1a8b8a63f8e2a21059113e8d5eb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.537023 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators(487e178a-7f42-4686-a24a-1eb0459cdde3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators(487e178a-7f42-4686-a24a-1eb0459cdde3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(3e20078638cf613f5033b5b0a7e40edcafd1a8b8a63f8e2a21059113e8d5eb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" podUID="487e178a-7f42-4686-a24a-1eb0459cdde3" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.573089 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hn4jw"] Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.573766 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.578727 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-v4ngk" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.603973 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/80fe938a-32f9-4742-ab1d-d1fafa082776-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hn4jw\" (UID: \"80fe938a-32f9-4742-ab1d-d1fafa082776\") " pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.604367 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7trx\" (UniqueName: \"kubernetes.io/projected/6d5c4d2c-ca63-4cba-9ea5-fba7281716b4-kube-api-access-k7trx\") pod \"observability-operator-59bdc8b94-pnltp\" (UID: \"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4\") " pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.604399 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htmw\" (UniqueName: \"kubernetes.io/projected/80fe938a-32f9-4742-ab1d-d1fafa082776-kube-api-access-7htmw\") pod \"perses-operator-5bf474d74f-hn4jw\" (UID: \"80fe938a-32f9-4742-ab1d-d1fafa082776\") " pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.604448 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d5c4d2c-ca63-4cba-9ea5-fba7281716b4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pnltp\" (UID: \"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4\") " 
pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.609940 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d5c4d2c-ca63-4cba-9ea5-fba7281716b4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pnltp\" (UID: \"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4\") " pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.630983 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7trx\" (UniqueName: \"kubernetes.io/projected/6d5c4d2c-ca63-4cba-9ea5-fba7281716b4-kube-api-access-k7trx\") pod \"observability-operator-59bdc8b94-pnltp\" (UID: \"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4\") " pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.712495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7htmw\" (UniqueName: \"kubernetes.io/projected/80fe938a-32f9-4742-ab1d-d1fafa082776-kube-api-access-7htmw\") pod \"perses-operator-5bf474d74f-hn4jw\" (UID: \"80fe938a-32f9-4742-ab1d-d1fafa082776\") " pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.712582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/80fe938a-32f9-4742-ab1d-d1fafa082776-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hn4jw\" (UID: \"80fe938a-32f9-4742-ab1d-d1fafa082776\") " pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.713421 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/80fe938a-32f9-4742-ab1d-d1fafa082776-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hn4jw\" (UID: \"80fe938a-32f9-4742-ab1d-d1fafa082776\") " pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.739334 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7htmw\" (UniqueName: \"kubernetes.io/projected/80fe938a-32f9-4742-ab1d-d1fafa082776-kube-api-access-7htmw\") pod \"perses-operator-5bf474d74f-hn4jw\" (UID: \"80fe938a-32f9-4742-ab1d-d1fafa082776\") " pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.766055 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.793546 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(cc4495066fc77712d897f80d0b4c0da92f5e975c341fa2046c4062f3146ef637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.793627 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(cc4495066fc77712d897f80d0b4c0da92f5e975c341fa2046c4062f3146ef637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.793659 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(cc4495066fc77712d897f80d0b4c0da92f5e975c341fa2046c4062f3146ef637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.793724 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-pnltp_openshift-operators(6d5c4d2c-ca63-4cba-9ea5-fba7281716b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-pnltp_openshift-operators(6d5c4d2c-ca63-4cba-9ea5-fba7281716b4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(cc4495066fc77712d897f80d0b4c0da92f5e975c341fa2046c4062f3146ef637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" Feb 19 21:09:04 crc kubenswrapper[4886]: I0219 21:09:04.889447 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.910542 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(cdcba1d8f6d6c343a5131d73c3182d47a401dca44567f7e1613e14c2ca6d6e52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.910613 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(cdcba1d8f6d6c343a5131d73c3182d47a401dca44567f7e1613e14c2ca6d6e52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.910635 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(cdcba1d8f6d6c343a5131d73c3182d47a401dca44567f7e1613e14c2ca6d6e52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:04 crc kubenswrapper[4886]: E0219 21:09:04.910684 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-hn4jw_openshift-operators(80fe938a-32f9-4742-ab1d-d1fafa082776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-hn4jw_openshift-operators(80fe938a-32f9-4742-ab1d-d1fafa082776)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(cdcba1d8f6d6c343a5131d73c3182d47a401dca44567f7e1613e14c2ca6d6e52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podUID="80fe938a-32f9-4742-ab1d-d1fafa082776" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.427521 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx"] Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.433301 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw"] Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.433402 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.433924 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.456479 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf"] Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.468828 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(221567c994450aab55155cf94e4bf82fbb945a29040ca14451f5271320482d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.468930 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(221567c994450aab55155cf94e4bf82fbb945a29040ca14451f5271320482d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.468953 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(221567c994450aab55155cf94e4bf82fbb945a29040ca14451f5271320482d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.468995 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators(30a3eec6-660b-4a66-a96b-a63626faf87e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators(30a3eec6-660b-4a66-a96b-a63626faf87e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(221567c994450aab55155cf94e4bf82fbb945a29040ca14451f5271320482d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" podUID="30a3eec6-660b-4a66-a96b-a63626faf87e" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.478002 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hn4jw"] Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.482938 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pnltp"] Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.530066 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.530414 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" event={"ID":"c735ed2a-c0e6-4833-a21d-4adb485e1101","Type":"ContainerStarted","Data":"096f5ae7138719a2f527f5dc4287192cb02fb7b1f208edee48bd9f5057036b7d"} Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.530689 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.530711 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.530728 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.530906 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.531449 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.531450 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.531667 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.531875 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.531895 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.532133 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.559157 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.564547 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" podStartSLOduration=7.564527255 podStartE2EDuration="7.564527255s" podCreationTimestamp="2026-02-19 21:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:09:05.558502647 +0000 UTC m=+576.186345707" watchObservedRunningTime="2026-02-19 21:09:05.564527255 +0000 UTC m=+576.192370305" Feb 19 21:09:05 crc kubenswrapper[4886]: I0219 21:09:05.595546 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.609684 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(5adf62ace211c4975b793514ddbce25f9c18b021373fa5dc33e65eda0bc28dc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.609741 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(5adf62ace211c4975b793514ddbce25f9c18b021373fa5dc33e65eda0bc28dc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.609763 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(5adf62ace211c4975b793514ddbce25f9c18b021373fa5dc33e65eda0bc28dc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.609801 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-hn4jw_openshift-operators(80fe938a-32f9-4742-ab1d-d1fafa082776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-hn4jw_openshift-operators(80fe938a-32f9-4742-ab1d-d1fafa082776)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(5adf62ace211c4975b793514ddbce25f9c18b021373fa5dc33e65eda0bc28dc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podUID="80fe938a-32f9-4742-ab1d-d1fafa082776" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.620461 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(391c8f45bb3f4b81e161a0ee6bedfe351760914313d0ec699fb3e7b0ffec69a1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.620524 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(391c8f45bb3f4b81e161a0ee6bedfe351760914313d0ec699fb3e7b0ffec69a1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.620544 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(391c8f45bb3f4b81e161a0ee6bedfe351760914313d0ec699fb3e7b0ffec69a1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.620592 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-pnltp_openshift-operators(6d5c4d2c-ca63-4cba-9ea5-fba7281716b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-pnltp_openshift-operators(6d5c4d2c-ca63-4cba-9ea5-fba7281716b4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(391c8f45bb3f4b81e161a0ee6bedfe351760914313d0ec699fb3e7b0ffec69a1): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.622090 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(347277f2eaa4ab24eb38155b7867c9f5b16e86342f5ff2e6e451b7bf89d6efea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.622183 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(347277f2eaa4ab24eb38155b7867c9f5b16e86342f5ff2e6e451b7bf89d6efea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.622207 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(347277f2eaa4ab24eb38155b7867c9f5b16e86342f5ff2e6e451b7bf89d6efea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.622249 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators(b502ab55-dc43-4a02-b47f-5fcd86f3a827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators(b502ab55-dc43-4a02-b47f-5fcd86f3a827)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(347277f2eaa4ab24eb38155b7867c9f5b16e86342f5ff2e6e451b7bf89d6efea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" podUID="b502ab55-dc43-4a02-b47f-5fcd86f3a827" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.633849 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(293c1661cab85e9e6ef023e24f4803183f3409617ec9bacb8ba9e72b37ec02df): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.633906 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(293c1661cab85e9e6ef023e24f4803183f3409617ec9bacb8ba9e72b37ec02df): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.633931 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(293c1661cab85e9e6ef023e24f4803183f3409617ec9bacb8ba9e72b37ec02df): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:05 crc kubenswrapper[4886]: E0219 21:09:05.633980 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators(487e178a-7f42-4686-a24a-1eb0459cdde3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators(487e178a-7f42-4686-a24a-1eb0459cdde3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(293c1661cab85e9e6ef023e24f4803183f3409617ec9bacb8ba9e72b37ec02df): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" podUID="487e178a-7f42-4686-a24a-1eb0459cdde3" Feb 19 21:09:12 crc kubenswrapper[4886]: I0219 21:09:12.601236 4886 scope.go:117] "RemoveContainer" containerID="b1d24b72a538ea2a24b25c0abe1e665851c85706dcd18e501c91a69c90bfa883" Feb 19 21:09:12 crc kubenswrapper[4886]: E0219 21:09:12.602174 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rnffz_openshift-multus(83f8fca5-68c6-4300-b2d8-64a58bf92a64)\"" pod="openshift-multus/multus-rnffz" podUID="83f8fca5-68c6-4300-b2d8-64a58bf92a64" Feb 19 21:09:17 crc kubenswrapper[4886]: I0219 21:09:17.600689 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:17 crc kubenswrapper[4886]: I0219 21:09:17.600756 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:17 crc kubenswrapper[4886]: I0219 21:09:17.601697 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:17 crc kubenswrapper[4886]: I0219 21:09:17.601804 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.651569 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(046123d7f66209b197eea07bfa5d4176f65b23bee34bd98bf21baad3eae7206f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.651878 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(046123d7f66209b197eea07bfa5d4176f65b23bee34bd98bf21baad3eae7206f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.651906 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(046123d7f66209b197eea07bfa5d4176f65b23bee34bd98bf21baad3eae7206f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.651973 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators(30a3eec6-660b-4a66-a96b-a63626faf87e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators(30a3eec6-660b-4a66-a96b-a63626faf87e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-7zllw_openshift-operators_30a3eec6-660b-4a66-a96b-a63626faf87e_0(046123d7f66209b197eea07bfa5d4176f65b23bee34bd98bf21baad3eae7206f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" podUID="30a3eec6-660b-4a66-a96b-a63626faf87e" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.657072 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(81b71e7a9c879266a05a995bd4387a85fa00a30da0b9482b3d16de1fa1911359): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.657122 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(81b71e7a9c879266a05a995bd4387a85fa00a30da0b9482b3d16de1fa1911359): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.657145 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(81b71e7a9c879266a05a995bd4387a85fa00a30da0b9482b3d16de1fa1911359): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:17 crc kubenswrapper[4886]: E0219 21:09:17.657187 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators(b502ab55-dc43-4a02-b47f-5fcd86f3a827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators(b502ab55-dc43-4a02-b47f-5fcd86f3a827)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf_openshift-operators_b502ab55-dc43-4a02-b47f-5fcd86f3a827_0(81b71e7a9c879266a05a995bd4387a85fa00a30da0b9482b3d16de1fa1911359): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" podUID="b502ab55-dc43-4a02-b47f-5fcd86f3a827" Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.325204 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.325322 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.325405 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.326153 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ae729cd998c06e3f4a3bc7ed90125f52db03861def52b8d2aacbec9bd8a3520"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.326227 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://5ae729cd998c06e3f4a3bc7ed90125f52db03861def52b8d2aacbec9bd8a3520" gracePeriod=600 Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 
21:09:18.602565 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.603837 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.639173 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="5ae729cd998c06e3f4a3bc7ed90125f52db03861def52b8d2aacbec9bd8a3520" exitCode=0 Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.639371 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"5ae729cd998c06e3f4a3bc7ed90125f52db03861def52b8d2aacbec9bd8a3520"} Feb 19 21:09:18 crc kubenswrapper[4886]: I0219 21:09:18.639881 4886 scope.go:117] "RemoveContainer" containerID="3464edb1ae3ca2be01c37ffa7dd5b104876610570320b94e212300c87c30c890" Feb 19 21:09:18 crc kubenswrapper[4886]: E0219 21:09:18.641696 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(1dee13f517c3af9db9915d38a0a15d2301bcece9b396f18444378e705bdc8f10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 21:09:18 crc kubenswrapper[4886]: E0219 21:09:18.641748 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(1dee13f517c3af9db9915d38a0a15d2301bcece9b396f18444378e705bdc8f10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:18 crc kubenswrapper[4886]: E0219 21:09:18.641774 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(1dee13f517c3af9db9915d38a0a15d2301bcece9b396f18444378e705bdc8f10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:18 crc kubenswrapper[4886]: E0219 21:09:18.641818 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-pnltp_openshift-operators(6d5c4d2c-ca63-4cba-9ea5-fba7281716b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-pnltp_openshift-operators(6d5c4d2c-ca63-4cba-9ea5-fba7281716b4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pnltp_openshift-operators_6d5c4d2c-ca63-4cba-9ea5-fba7281716b4_0(1dee13f517c3af9db9915d38a0a15d2301bcece9b396f18444378e705bdc8f10): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" Feb 19 21:09:19 crc kubenswrapper[4886]: I0219 21:09:19.600319 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:19 crc kubenswrapper[4886]: I0219 21:09:19.601235 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:19 crc kubenswrapper[4886]: E0219 21:09:19.642732 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(160d79716b97fb1ed7aa62fc0c0c4e187f376e2f32f9695459f712b9ecbfcfc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:19 crc kubenswrapper[4886]: E0219 21:09:19.642812 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(160d79716b97fb1ed7aa62fc0c0c4e187f376e2f32f9695459f712b9ecbfcfc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:19 crc kubenswrapper[4886]: E0219 21:09:19.642850 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(160d79716b97fb1ed7aa62fc0c0c4e187f376e2f32f9695459f712b9ecbfcfc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:19 crc kubenswrapper[4886]: E0219 21:09:19.642924 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-hn4jw_openshift-operators(80fe938a-32f9-4742-ab1d-d1fafa082776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-hn4jw_openshift-operators(80fe938a-32f9-4742-ab1d-d1fafa082776)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-hn4jw_openshift-operators_80fe938a-32f9-4742-ab1d-d1fafa082776_0(160d79716b97fb1ed7aa62fc0c0c4e187f376e2f32f9695459f712b9ecbfcfc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podUID="80fe938a-32f9-4742-ab1d-d1fafa082776" Feb 19 21:09:19 crc kubenswrapper[4886]: I0219 21:09:19.660380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"30855271763aa03f95502a62cdac34f3aaa5896ebbe6fec54e402ff34f490d71"} Feb 19 21:09:20 crc kubenswrapper[4886]: I0219 21:09:20.605349 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:20 crc kubenswrapper[4886]: I0219 21:09:20.606411 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:20 crc kubenswrapper[4886]: E0219 21:09:20.651723 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(b7db730fce6d5e693d84b2364190691537ff74bf45c57b229064a13d0b0b1e4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 21:09:20 crc kubenswrapper[4886]: E0219 21:09:20.651819 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(b7db730fce6d5e693d84b2364190691537ff74bf45c57b229064a13d0b0b1e4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:20 crc kubenswrapper[4886]: E0219 21:09:20.651872 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(b7db730fce6d5e693d84b2364190691537ff74bf45c57b229064a13d0b0b1e4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:20 crc kubenswrapper[4886]: E0219 21:09:20.651969 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators(487e178a-7f42-4686-a24a-1eb0459cdde3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators(487e178a-7f42-4686-a24a-1eb0459cdde3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx_openshift-operators_487e178a-7f42-4686-a24a-1eb0459cdde3_0(b7db730fce6d5e693d84b2364190691537ff74bf45c57b229064a13d0b0b1e4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" podUID="487e178a-7f42-4686-a24a-1eb0459cdde3" Feb 19 21:09:25 crc kubenswrapper[4886]: I0219 21:09:25.600813 4886 scope.go:117] "RemoveContainer" containerID="b1d24b72a538ea2a24b25c0abe1e665851c85706dcd18e501c91a69c90bfa883" Feb 19 21:09:26 crc kubenswrapper[4886]: I0219 21:09:26.726516 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rnffz_83f8fca5-68c6-4300-b2d8-64a58bf92a64/kube-multus/2.log" Feb 19 21:09:26 crc kubenswrapper[4886]: I0219 21:09:26.726863 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rnffz" event={"ID":"83f8fca5-68c6-4300-b2d8-64a58bf92a64","Type":"ContainerStarted","Data":"ccc08d412fb7160629ca53d378ae2b51d9e095be7660dcd2d623b5068945c759"} Feb 19 21:09:29 crc kubenswrapper[4886]: I0219 21:09:29.292620 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5vhjh" Feb 19 21:09:30 crc kubenswrapper[4886]: I0219 21:09:30.600729 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:30 crc kubenswrapper[4886]: I0219 21:09:30.606455 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:30 crc kubenswrapper[4886]: I0219 21:09:30.606485 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:30 crc kubenswrapper[4886]: I0219 21:09:30.607352 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:30 crc kubenswrapper[4886]: I0219 21:09:30.607500 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:30 crc kubenswrapper[4886]: I0219 21:09:30.607354 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.175214 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pnltp"] Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.219064 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hn4jw"] Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.224146 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw"] Feb 19 21:09:31 crc kubenswrapper[4886]: W0219 21:09:31.232054 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a3eec6_660b_4a66_a96b_a63626faf87e.slice/crio-fd3c76b894902c2676ccf4a0641f352ebb83049f2c98ae5aa95f763c167cddd9 WatchSource:0}: Error finding container fd3c76b894902c2676ccf4a0641f352ebb83049f2c98ae5aa95f763c167cddd9: Status 404 returned error can't find the container with id fd3c76b894902c2676ccf4a0641f352ebb83049f2c98ae5aa95f763c167cddd9 Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.600423 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.600876 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.781054 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" event={"ID":"80fe938a-32f9-4742-ab1d-d1fafa082776","Type":"ContainerStarted","Data":"541135cb053147e3de243c98e9163560190fd5990153db66268d2da644338439"} Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.783216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" event={"ID":"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4","Type":"ContainerStarted","Data":"30bcf6b33e1e99d271f03d97dbe61b291f0f6294a9b9a045d82506d333d88a0f"} Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.784668 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" event={"ID":"30a3eec6-660b-4a66-a96b-a63626faf87e","Type":"ContainerStarted","Data":"fd3c76b894902c2676ccf4a0641f352ebb83049f2c98ae5aa95f763c167cddd9"} Feb 19 21:09:31 crc kubenswrapper[4886]: I0219 21:09:31.943474 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx"] Feb 19 21:09:32 crc kubenswrapper[4886]: I0219 21:09:32.600612 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:32 crc kubenswrapper[4886]: I0219 21:09:32.601513 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" Feb 19 21:09:32 crc kubenswrapper[4886]: I0219 21:09:32.819942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" event={"ID":"487e178a-7f42-4686-a24a-1eb0459cdde3","Type":"ContainerStarted","Data":"8585adaf5df2d53bb584ce6d89858d7afe74672fcb55a3e8162c3f79b2486c22"} Feb 19 21:09:32 crc kubenswrapper[4886]: I0219 21:09:32.932042 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf"] Feb 19 21:09:33 crc kubenswrapper[4886]: I0219 21:09:33.828615 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" event={"ID":"b502ab55-dc43-4a02-b47f-5fcd86f3a827","Type":"ContainerStarted","Data":"5e35e29be13831b13ede96eed8354878c4b2dbbbf2d8a58b48e0093440359435"} Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.883946 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" event={"ID":"80fe938a-32f9-4742-ab1d-d1fafa082776","Type":"ContainerStarted","Data":"7798a477ce62997deed9131bbcf24560bcdb630872274774a7b27293103b42c7"} Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.884533 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.886499 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" event={"ID":"6d5c4d2c-ca63-4cba-9ea5-fba7281716b4","Type":"ContainerStarted","Data":"5ac01b302ad8e9cf74c80f6f96cba3819bab1592838ed1be6bfda54eb26fcf1e"} Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.886740 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.888322 4886 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-pnltp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.26:8081/healthz\": dial tcp 10.217.0.26:8081: connect: connection refused" start-of-body= Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.888412 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.26:8081/healthz\": dial tcp 10.217.0.26:8081: connect: connection refused" Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.888716 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" event={"ID":"b502ab55-dc43-4a02-b47f-5fcd86f3a827","Type":"ContainerStarted","Data":"86c40359fe2001d8d4bdde41d591e468d0a99fddd7c2c144f1569fd7515e499b"} Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.890896 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" event={"ID":"487e178a-7f42-4686-a24a-1eb0459cdde3","Type":"ContainerStarted","Data":"9ece84de5ef3f7f14f691ad8df678f91cf23c11d71522e7d91f983f5b0fbbbea"} Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.893934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" event={"ID":"30a3eec6-660b-4a66-a96b-a63626faf87e","Type":"ContainerStarted","Data":"af33c281384d7906ed215eed2da1f14871c4f4ade9903df78649c546ec32e50a"} Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.922476 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podStartSLOduration=27.888680679 podStartE2EDuration="36.92246054s" podCreationTimestamp="2026-02-19 21:09:04 +0000 UTC" firstStartedPulling="2026-02-19 21:09:31.227349928 +0000 UTC m=+601.855193018" lastFinishedPulling="2026-02-19 21:09:40.261129829 +0000 UTC m=+610.888972879" observedRunningTime="2026-02-19 21:09:40.919843706 +0000 UTC m=+611.547686766" watchObservedRunningTime="2026-02-19 21:09:40.92246054 +0000 UTC m=+611.550303600" Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.979759 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-fbljf" podStartSLOduration=29.665565523 podStartE2EDuration="36.979746499s" podCreationTimestamp="2026-02-19 21:09:04 +0000 UTC" firstStartedPulling="2026-02-19 21:09:32.953199367 +0000 UTC m=+603.581042457" lastFinishedPulling="2026-02-19 21:09:40.267380343 +0000 UTC m=+610.895223433" observedRunningTime="2026-02-19 21:09:40.978211082 +0000 UTC m=+611.606054132" watchObservedRunningTime="2026-02-19 21:09:40.979746499 +0000 UTC m=+611.607589549" Feb 19 21:09:40 crc kubenswrapper[4886]: I0219 21:09:40.981147 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7zllw" podStartSLOduration=27.954244992 podStartE2EDuration="36.981141274s" podCreationTimestamp="2026-02-19 21:09:04 +0000 UTC" firstStartedPulling="2026-02-19 21:09:31.234962175 +0000 UTC m=+601.862805235" lastFinishedPulling="2026-02-19 21:09:40.261858417 +0000 UTC m=+610.889701517" observedRunningTime="2026-02-19 21:09:40.950159201 +0000 UTC m=+611.578002291" watchObservedRunningTime="2026-02-19 21:09:40.981141274 +0000 UTC m=+611.608984324" Feb 19 21:09:41 crc kubenswrapper[4886]: I0219 21:09:41.004883 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d9c8dbfcf-gknfx" podStartSLOduration=28.724702606 podStartE2EDuration="37.004866517s" podCreationTimestamp="2026-02-19 21:09:04 +0000 UTC" firstStartedPulling="2026-02-19 21:09:31.955889532 +0000 UTC m=+602.583732622" lastFinishedPulling="2026-02-19 21:09:40.236053483 +0000 UTC m=+610.863896533" observedRunningTime="2026-02-19 21:09:41.000580672 +0000 UTC m=+611.628423732" watchObservedRunningTime="2026-02-19 21:09:41.004866517 +0000 UTC m=+611.632709567" Feb 19 21:09:41 crc kubenswrapper[4886]: I0219 21:09:41.033300 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podStartSLOduration=27.875079133 podStartE2EDuration="37.033285436s" podCreationTimestamp="2026-02-19 21:09:04 +0000 UTC" firstStartedPulling="2026-02-19 21:09:31.191984678 +0000 UTC m=+601.819827738" lastFinishedPulling="2026-02-19 21:09:40.350190981 +0000 UTC m=+610.978034041" observedRunningTime="2026-02-19 21:09:41.029932574 +0000 UTC m=+611.657775624" watchObservedRunningTime="2026-02-19 21:09:41.033285436 +0000 UTC m=+611.661128476" Feb 19 21:09:41 crc kubenswrapper[4886]: I0219 21:09:41.905388 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.053758 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.055076 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.060225 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lsgnt" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.062827 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.067449 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bmn7f"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.068162 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bmn7f" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.069870 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cddlf" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.075519 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.080909 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gc6d9"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.082494 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.088520 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ls7v6" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.094055 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bmn7f"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.120600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.128214 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lgx\" (UniqueName: \"kubernetes.io/projected/8ae55c25-e6f7-40ee-89e9-401d6ce1da0a-kube-api-access-d5lgx\") pod \"cert-manager-cainjector-cf98fcc89-6vmqp\" (UID: \"8ae55c25-e6f7-40ee-89e9-401d6ce1da0a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.128316 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkk2\" (UniqueName: \"kubernetes.io/projected/7474309f-a146-43e6-bd0d-03c678b50e92-kube-api-access-qtkk2\") pod \"cert-manager-webhook-687f57d79b-gc6d9\" (UID: \"7474309f-a146-43e6-bd0d-03c678b50e92\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.128387 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cvhw\" (UniqueName: \"kubernetes.io/projected/a8396ff1-94f4-4d3b-8e82-e8a8d413fc52-kube-api-access-2cvhw\") pod \"cert-manager-858654f9db-bmn7f\" (UID: \"a8396ff1-94f4-4d3b-8e82-e8a8d413fc52\") " pod="cert-manager/cert-manager-858654f9db-bmn7f" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.147661 4886 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gc6d9"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.229145 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cvhw\" (UniqueName: \"kubernetes.io/projected/a8396ff1-94f4-4d3b-8e82-e8a8d413fc52-kube-api-access-2cvhw\") pod \"cert-manager-858654f9db-bmn7f\" (UID: \"a8396ff1-94f4-4d3b-8e82-e8a8d413fc52\") " pod="cert-manager/cert-manager-858654f9db-bmn7f" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.229214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lgx\" (UniqueName: \"kubernetes.io/projected/8ae55c25-e6f7-40ee-89e9-401d6ce1da0a-kube-api-access-d5lgx\") pod \"cert-manager-cainjector-cf98fcc89-6vmqp\" (UID: \"8ae55c25-e6f7-40ee-89e9-401d6ce1da0a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.229272 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkk2\" (UniqueName: \"kubernetes.io/projected/7474309f-a146-43e6-bd0d-03c678b50e92-kube-api-access-qtkk2\") pod \"cert-manager-webhook-687f57d79b-gc6d9\" (UID: \"7474309f-a146-43e6-bd0d-03c678b50e92\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.249406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cvhw\" (UniqueName: \"kubernetes.io/projected/a8396ff1-94f4-4d3b-8e82-e8a8d413fc52-kube-api-access-2cvhw\") pod \"cert-manager-858654f9db-bmn7f\" (UID: \"a8396ff1-94f4-4d3b-8e82-e8a8d413fc52\") " pod="cert-manager/cert-manager-858654f9db-bmn7f" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.251198 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkk2\" (UniqueName: 
\"kubernetes.io/projected/7474309f-a146-43e6-bd0d-03c678b50e92-kube-api-access-qtkk2\") pod \"cert-manager-webhook-687f57d79b-gc6d9\" (UID: \"7474309f-a146-43e6-bd0d-03c678b50e92\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.255044 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lgx\" (UniqueName: \"kubernetes.io/projected/8ae55c25-e6f7-40ee-89e9-401d6ce1da0a-kube-api-access-d5lgx\") pod \"cert-manager-cainjector-cf98fcc89-6vmqp\" (UID: \"8ae55c25-e6f7-40ee-89e9-401d6ce1da0a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.378860 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.389732 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bmn7f" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.406707 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.654291 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp"] Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.708351 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bmn7f"] Feb 19 21:09:50 crc kubenswrapper[4886]: W0219 21:09:50.719374 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8396ff1_94f4_4d3b_8e82_e8a8d413fc52.slice/crio-8f116f84dcb04d39379d593cc20d51c24e08721cfd7414db726a267de74cd16b WatchSource:0}: Error finding container 8f116f84dcb04d39379d593cc20d51c24e08721cfd7414db726a267de74cd16b: Status 404 returned error can't find the container with id 8f116f84dcb04d39379d593cc20d51c24e08721cfd7414db726a267de74cd16b Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.733186 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gc6d9"] Feb 19 21:09:50 crc kubenswrapper[4886]: W0219 21:09:50.736378 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7474309f_a146_43e6_bd0d_03c678b50e92.slice/crio-2cd1e241fe30667ff2caf537de01f66d5440da60db521fe7f2fdabe942bb4ee9 WatchSource:0}: Error finding container 2cd1e241fe30667ff2caf537de01f66d5440da60db521fe7f2fdabe942bb4ee9: Status 404 returned error can't find the container with id 2cd1e241fe30667ff2caf537de01f66d5440da60db521fe7f2fdabe942bb4ee9 Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.975012 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" 
event={"ID":"8ae55c25-e6f7-40ee-89e9-401d6ce1da0a","Type":"ContainerStarted","Data":"50820c2c1c802ae3f7e4b890c0c7a3fd688cc32bc07852a6a007c50d678cc878"} Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.977376 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" event={"ID":"7474309f-a146-43e6-bd0d-03c678b50e92","Type":"ContainerStarted","Data":"2cd1e241fe30667ff2caf537de01f66d5440da60db521fe7f2fdabe942bb4ee9"} Feb 19 21:09:50 crc kubenswrapper[4886]: I0219 21:09:50.978452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bmn7f" event={"ID":"a8396ff1-94f4-4d3b-8e82-e8a8d413fc52","Type":"ContainerStarted","Data":"8f116f84dcb04d39379d593cc20d51c24e08721cfd7414db726a267de74cd16b"} Feb 19 21:09:54 crc kubenswrapper[4886]: I0219 21:09:54.891862 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" Feb 19 21:09:56 crc kubenswrapper[4886]: I0219 21:09:56.016067 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bmn7f" event={"ID":"a8396ff1-94f4-4d3b-8e82-e8a8d413fc52","Type":"ContainerStarted","Data":"28737acf0d85099a42ef49502161969fde030e55c0384233d174c7a908f96bd2"} Feb 19 21:09:56 crc kubenswrapper[4886]: I0219 21:09:56.018392 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" event={"ID":"8ae55c25-e6f7-40ee-89e9-401d6ce1da0a","Type":"ContainerStarted","Data":"645acaddf029a6884083a1457d83e70d191e3ed46f58876b8fca2bf45ed0e5d6"} Feb 19 21:09:56 crc kubenswrapper[4886]: I0219 21:09:56.020190 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" event={"ID":"7474309f-a146-43e6-bd0d-03c678b50e92","Type":"ContainerStarted","Data":"01625917dc973041136d7f6278e033da36c0b85afab559ebb946ec384f70f879"} Feb 19 21:09:56 crc 
kubenswrapper[4886]: I0219 21:09:56.020413 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:09:56 crc kubenswrapper[4886]: I0219 21:09:56.037987 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bmn7f" podStartSLOduration=1.606822111 podStartE2EDuration="6.037971887s" podCreationTimestamp="2026-02-19 21:09:50 +0000 UTC" firstStartedPulling="2026-02-19 21:09:50.728371499 +0000 UTC m=+621.356214569" lastFinishedPulling="2026-02-19 21:09:55.159521295 +0000 UTC m=+625.787364345" observedRunningTime="2026-02-19 21:09:56.03607548 +0000 UTC m=+626.663918530" watchObservedRunningTime="2026-02-19 21:09:56.037971887 +0000 UTC m=+626.665814927" Feb 19 21:09:56 crc kubenswrapper[4886]: I0219 21:09:56.055237 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" podStartSLOduration=1.640460639 podStartE2EDuration="6.055206201s" podCreationTimestamp="2026-02-19 21:09:50 +0000 UTC" firstStartedPulling="2026-02-19 21:09:50.73938581 +0000 UTC m=+621.367228860" lastFinishedPulling="2026-02-19 21:09:55.154131372 +0000 UTC m=+625.781974422" observedRunningTime="2026-02-19 21:09:56.052164066 +0000 UTC m=+626.680007106" watchObservedRunningTime="2026-02-19 21:09:56.055206201 +0000 UTC m=+626.683049251" Feb 19 21:09:56 crc kubenswrapper[4886]: I0219 21:09:56.089463 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6vmqp" podStartSLOduration=1.548404203 podStartE2EDuration="6.089447233s" podCreationTimestamp="2026-02-19 21:09:50 +0000 UTC" firstStartedPulling="2026-02-19 21:09:50.663635496 +0000 UTC m=+621.291478546" lastFinishedPulling="2026-02-19 21:09:55.204678526 +0000 UTC m=+625.832521576" observedRunningTime="2026-02-19 21:09:56.085145178 +0000 UTC m=+626.712988228" 
watchObservedRunningTime="2026-02-19 21:09:56.089447233 +0000 UTC m=+626.717290283" Feb 19 21:10:00 crc kubenswrapper[4886]: I0219 21:10:00.411911 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.053741 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj"] Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.055889 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.063465 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.078476 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj"] Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.121307 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.121432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.121487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffk9\" (UniqueName: \"kubernetes.io/projected/8699a699-3b0a-4b36-8c1c-eefb09c102c2-kube-api-access-9ffk9\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.222557 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.222662 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.222720 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffk9\" (UniqueName: \"kubernetes.io/projected/8699a699-3b0a-4b36-8c1c-eefb09c102c2-kube-api-access-9ffk9\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 
21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.223587 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.223597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.233619 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5"] Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.235255 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.251041 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5"] Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.260357 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffk9\" (UniqueName: \"kubernetes.io/projected/8699a699-3b0a-4b36-8c1c-eefb09c102c2-kube-api-access-9ffk9\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.324117 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88l2n\" (UniqueName: \"kubernetes.io/projected/c26e9b9d-214b-4142-9283-1e30bb501d1a-kube-api-access-88l2n\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.324782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.324925 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.375383 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.429526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88l2n\" (UniqueName: \"kubernetes.io/projected/c26e9b9d-214b-4142-9283-1e30bb501d1a-kube-api-access-88l2n\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.429585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.429620 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.430239 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.430487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.456239 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88l2n\" (UniqueName: \"kubernetes.io/projected/c26e9b9d-214b-4142-9283-1e30bb501d1a-kube-api-access-88l2n\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.586098 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.638085 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj"] Feb 19 21:10:22 crc kubenswrapper[4886]: I0219 21:10:22.811331 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5"] Feb 19 21:10:22 crc kubenswrapper[4886]: W0219 21:10:22.829184 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc26e9b9d_214b_4142_9283_1e30bb501d1a.slice/crio-2230747529c159ded2a701043d9f79fbe8b5b430967e01de100bfffde3656a85 WatchSource:0}: Error finding container 2230747529c159ded2a701043d9f79fbe8b5b430967e01de100bfffde3656a85: Status 404 returned error can't find the container with id 2230747529c159ded2a701043d9f79fbe8b5b430967e01de100bfffde3656a85 Feb 19 21:10:23 crc kubenswrapper[4886]: I0219 21:10:23.224180 4886 generic.go:334] "Generic (PLEG): container finished" podID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerID="9108e3c5a23eed5b998e3ce5bdc7c8e6b38364081408848232c0c59b08598cbc" exitCode=0 Feb 19 21:10:23 crc kubenswrapper[4886]: I0219 21:10:23.224298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" event={"ID":"c26e9b9d-214b-4142-9283-1e30bb501d1a","Type":"ContainerDied","Data":"9108e3c5a23eed5b998e3ce5bdc7c8e6b38364081408848232c0c59b08598cbc"} Feb 19 21:10:23 crc kubenswrapper[4886]: I0219 21:10:23.224359 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" 
event={"ID":"c26e9b9d-214b-4142-9283-1e30bb501d1a","Type":"ContainerStarted","Data":"2230747529c159ded2a701043d9f79fbe8b5b430967e01de100bfffde3656a85"} Feb 19 21:10:23 crc kubenswrapper[4886]: I0219 21:10:23.228011 4886 generic.go:334] "Generic (PLEG): container finished" podID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerID="9533845a5a02b5e3a44ef4e248e575175e87deb625ff80a776e11dd454d082cd" exitCode=0 Feb 19 21:10:23 crc kubenswrapper[4886]: I0219 21:10:23.228043 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" event={"ID":"8699a699-3b0a-4b36-8c1c-eefb09c102c2","Type":"ContainerDied","Data":"9533845a5a02b5e3a44ef4e248e575175e87deb625ff80a776e11dd454d082cd"} Feb 19 21:10:23 crc kubenswrapper[4886]: I0219 21:10:23.228066 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" event={"ID":"8699a699-3b0a-4b36-8c1c-eefb09c102c2","Type":"ContainerStarted","Data":"08dcbc0b822f34483b8610e277ac689b86accb1724004e84025ff223cfd8638c"} Feb 19 21:10:25 crc kubenswrapper[4886]: I0219 21:10:25.246783 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" event={"ID":"c26e9b9d-214b-4142-9283-1e30bb501d1a","Type":"ContainerStarted","Data":"dca22f4d7e402400df2f01c61abc41f0cafa620dbc3d6be32c8fe0653d3d5880"} Feb 19 21:10:25 crc kubenswrapper[4886]: I0219 21:10:25.248601 4886 generic.go:334] "Generic (PLEG): container finished" podID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerID="db5c6d5dfe4c2757cde05baec7d6b0416b160693bd47d8b36147289ddff79d93" exitCode=0 Feb 19 21:10:25 crc kubenswrapper[4886]: I0219 21:10:25.248637 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" 
event={"ID":"8699a699-3b0a-4b36-8c1c-eefb09c102c2","Type":"ContainerDied","Data":"db5c6d5dfe4c2757cde05baec7d6b0416b160693bd47d8b36147289ddff79d93"} Feb 19 21:10:26 crc kubenswrapper[4886]: I0219 21:10:26.258702 4886 generic.go:334] "Generic (PLEG): container finished" podID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerID="dca22f4d7e402400df2f01c61abc41f0cafa620dbc3d6be32c8fe0653d3d5880" exitCode=0 Feb 19 21:10:26 crc kubenswrapper[4886]: I0219 21:10:26.258802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" event={"ID":"c26e9b9d-214b-4142-9283-1e30bb501d1a","Type":"ContainerDied","Data":"dca22f4d7e402400df2f01c61abc41f0cafa620dbc3d6be32c8fe0653d3d5880"} Feb 19 21:10:26 crc kubenswrapper[4886]: I0219 21:10:26.262511 4886 generic.go:334] "Generic (PLEG): container finished" podID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerID="62a67993d5047a792d282ad714cccbe3daea2b70fee8e5cf8e3ceb7f1f1fa6c3" exitCode=0 Feb 19 21:10:26 crc kubenswrapper[4886]: I0219 21:10:26.262539 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" event={"ID":"8699a699-3b0a-4b36-8c1c-eefb09c102c2","Type":"ContainerDied","Data":"62a67993d5047a792d282ad714cccbe3daea2b70fee8e5cf8e3ceb7f1f1fa6c3"} Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.270775 4886 generic.go:334] "Generic (PLEG): container finished" podID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerID="1597c28a65bc14376cb4fb59a63ab36f6541990c8bd8c43a110ea52c425cc7cf" exitCode=0 Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.270859 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" event={"ID":"c26e9b9d-214b-4142-9283-1e30bb501d1a","Type":"ContainerDied","Data":"1597c28a65bc14376cb4fb59a63ab36f6541990c8bd8c43a110ea52c425cc7cf"} Feb 
19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.517671 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.610768 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-bundle\") pod \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.611081 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ffk9\" (UniqueName: \"kubernetes.io/projected/8699a699-3b0a-4b36-8c1c-eefb09c102c2-kube-api-access-9ffk9\") pod \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.611488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-util\") pod \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\" (UID: \"8699a699-3b0a-4b36-8c1c-eefb09c102c2\") " Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.612873 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-bundle" (OuterVolumeSpecName: "bundle") pod "8699a699-3b0a-4b36-8c1c-eefb09c102c2" (UID: "8699a699-3b0a-4b36-8c1c-eefb09c102c2"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.621744 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8699a699-3b0a-4b36-8c1c-eefb09c102c2-kube-api-access-9ffk9" (OuterVolumeSpecName: "kube-api-access-9ffk9") pod "8699a699-3b0a-4b36-8c1c-eefb09c102c2" (UID: "8699a699-3b0a-4b36-8c1c-eefb09c102c2"). InnerVolumeSpecName "kube-api-access-9ffk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.631869 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-util" (OuterVolumeSpecName: "util") pod "8699a699-3b0a-4b36-8c1c-eefb09c102c2" (UID: "8699a699-3b0a-4b36-8c1c-eefb09c102c2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.715895 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ffk9\" (UniqueName: \"kubernetes.io/projected/8699a699-3b0a-4b36-8c1c-eefb09c102c2-kube-api-access-9ffk9\") on node \"crc\" DevicePath \"\"" Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.716622 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-util\") on node \"crc\" DevicePath \"\"" Feb 19 21:10:27 crc kubenswrapper[4886]: I0219 21:10:27.716714 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8699a699-3b0a-4b36-8c1c-eefb09c102c2-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.283703 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" 
event={"ID":"8699a699-3b0a-4b36-8c1c-eefb09c102c2","Type":"ContainerDied","Data":"08dcbc0b822f34483b8610e277ac689b86accb1724004e84025ff223cfd8638c"} Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.284625 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08dcbc0b822f34483b8610e277ac689b86accb1724004e84025ff223cfd8638c" Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.283753 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194rmnj" Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.619000 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.731821 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88l2n\" (UniqueName: \"kubernetes.io/projected/c26e9b9d-214b-4142-9283-1e30bb501d1a-kube-api-access-88l2n\") pod \"c26e9b9d-214b-4142-9283-1e30bb501d1a\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.733245 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-bundle" (OuterVolumeSpecName: "bundle") pod "c26e9b9d-214b-4142-9283-1e30bb501d1a" (UID: "c26e9b9d-214b-4142-9283-1e30bb501d1a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.732062 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-bundle\") pod \"c26e9b9d-214b-4142-9283-1e30bb501d1a\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.733429 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-util\") pod \"c26e9b9d-214b-4142-9283-1e30bb501d1a\" (UID: \"c26e9b9d-214b-4142-9283-1e30bb501d1a\") " Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.736105 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.738077 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26e9b9d-214b-4142-9283-1e30bb501d1a-kube-api-access-88l2n" (OuterVolumeSpecName: "kube-api-access-88l2n") pod "c26e9b9d-214b-4142-9283-1e30bb501d1a" (UID: "c26e9b9d-214b-4142-9283-1e30bb501d1a"). InnerVolumeSpecName "kube-api-access-88l2n". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:10:28 crc kubenswrapper[4886]: I0219 21:10:28.837415 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88l2n\" (UniqueName: \"kubernetes.io/projected/c26e9b9d-214b-4142-9283-1e30bb501d1a-kube-api-access-88l2n\") on node \"crc\" DevicePath \"\""
Feb 19 21:10:29 crc kubenswrapper[4886]: I0219 21:10:29.262733 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-util" (OuterVolumeSpecName: "util") pod "c26e9b9d-214b-4142-9283-1e30bb501d1a" (UID: "c26e9b9d-214b-4142-9283-1e30bb501d1a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 21:10:29 crc kubenswrapper[4886]: I0219 21:10:29.302332 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5"
Feb 19 21:10:29 crc kubenswrapper[4886]: I0219 21:10:29.302405 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rrbm5" event={"ID":"c26e9b9d-214b-4142-9283-1e30bb501d1a","Type":"ContainerDied","Data":"2230747529c159ded2a701043d9f79fbe8b5b430967e01de100bfffde3656a85"}
Feb 19 21:10:29 crc kubenswrapper[4886]: I0219 21:10:29.302609 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2230747529c159ded2a701043d9f79fbe8b5b430967e01de100bfffde3656a85"
Feb 19 21:10:29 crc kubenswrapper[4886]: I0219 21:10:29.346789 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c26e9b9d-214b-4142-9283-1e30bb501d1a-util\") on node \"crc\" DevicePath \"\""
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.434978 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-n7hb7"]
Feb 19 21:10:35 crc kubenswrapper[4886]: E0219 21:10:35.435710 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="pull"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435725 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="pull"
Feb 19 21:10:35 crc kubenswrapper[4886]: E0219 21:10:35.435740 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="pull"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435747 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="pull"
Feb 19 21:10:35 crc kubenswrapper[4886]: E0219 21:10:35.435761 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="util"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435769 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="util"
Feb 19 21:10:35 crc kubenswrapper[4886]: E0219 21:10:35.435784 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="extract"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435791 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="extract"
Feb 19 21:10:35 crc kubenswrapper[4886]: E0219 21:10:35.435804 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="util"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435811 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="util"
Feb 19 21:10:35 crc kubenswrapper[4886]: E0219 21:10:35.435824 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="extract"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435832 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="extract"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435977 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8699a699-3b0a-4b36-8c1c-eefb09c102c2" containerName="extract"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.435996 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26e9b9d-214b-4142-9283-1e30bb501d1a" containerName="extract"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.436526 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.438951 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-npx4d"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.439216 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.443766 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.451761 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-n7hb7"]
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.540421 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrgkg\" (UniqueName: \"kubernetes.io/projected/64a3b7f2-5132-4771-a503-f2557820f357-kube-api-access-zrgkg\") pod \"cluster-logging-operator-c769fd969-n7hb7\" (UID: \"64a3b7f2-5132-4771-a503-f2557820f357\") " pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.641416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrgkg\" (UniqueName: \"kubernetes.io/projected/64a3b7f2-5132-4771-a503-f2557820f357-kube-api-access-zrgkg\") pod \"cluster-logging-operator-c769fd969-n7hb7\" (UID: \"64a3b7f2-5132-4771-a503-f2557820f357\") " pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.666637 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrgkg\" (UniqueName: \"kubernetes.io/projected/64a3b7f2-5132-4771-a503-f2557820f357-kube-api-access-zrgkg\") pod \"cluster-logging-operator-c769fd969-n7hb7\" (UID: \"64a3b7f2-5132-4771-a503-f2557820f357\") " pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7"
Feb 19 21:10:35 crc kubenswrapper[4886]: I0219 21:10:35.750651 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7"
Feb 19 21:10:36 crc kubenswrapper[4886]: I0219 21:10:36.207321 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-n7hb7"]
Feb 19 21:10:36 crc kubenswrapper[4886]: I0219 21:10:36.357798 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7" event={"ID":"64a3b7f2-5132-4771-a503-f2557820f357","Type":"ContainerStarted","Data":"0dd965718c9c96a587c78ce3cc9b41fb2bc5f497885fe6e4d43fe37a6c7c7981"}
Feb 19 21:10:42 crc kubenswrapper[4886]: I0219 21:10:42.401311 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7" event={"ID":"64a3b7f2-5132-4771-a503-f2557820f357","Type":"ContainerStarted","Data":"23a177ad4366aa2e1fe8716638f2e2b8df6410811fb9b7d15803dea41ecab2a5"}
Feb 19 21:10:42 crc kubenswrapper[4886]: I0219 21:10:42.428908 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-n7hb7" podStartSLOduration=2.071335356 podStartE2EDuration="7.428885234s" podCreationTimestamp="2026-02-19 21:10:35 +0000 UTC" firstStartedPulling="2026-02-19 21:10:36.21709686 +0000 UTC m=+666.844939920" lastFinishedPulling="2026-02-19 21:10:41.574646748 +0000 UTC m=+672.202489798" observedRunningTime="2026-02-19 21:10:42.425543152 +0000 UTC m=+673.053386202" watchObservedRunningTime="2026-02-19 21:10:42.428885234 +0000 UTC m=+673.056728304"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.911736 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"]
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.913209 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.915190 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.915384 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.915811 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.915845 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.916098 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-vv4jv"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.916304 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.938728 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"]
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.975512 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.975558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-webhook-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.975587 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwbd\" (UniqueName: \"kubernetes.io/projected/9ae9d788-4b23-480d-be58-dedda686c24d-kube-api-access-svwbd\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.975610 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-apiservice-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:44 crc kubenswrapper[4886]: I0219 21:10:44.975661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9ae9d788-4b23-480d-be58-dedda686c24d-manager-config\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.077239 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwbd\" (UniqueName: \"kubernetes.io/projected/9ae9d788-4b23-480d-be58-dedda686c24d-kube-api-access-svwbd\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.077311 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-apiservice-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.077360 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9ae9d788-4b23-480d-be58-dedda686c24d-manager-config\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.077409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.077428 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-webhook-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.079375 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9ae9d788-4b23-480d-be58-dedda686c24d-manager-config\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.083337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-webhook-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.084000 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-apiservice-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.087125 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ae9d788-4b23-480d-be58-dedda686c24d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.111031 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwbd\" (UniqueName: \"kubernetes.io/projected/9ae9d788-4b23-480d-be58-dedda686c24d-kube-api-access-svwbd\") pod \"loki-operator-controller-manager-b77f6dcd-4z22f\" (UID: \"9ae9d788-4b23-480d-be58-dedda686c24d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.234033 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:45 crc kubenswrapper[4886]: I0219 21:10:45.643288 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"]
Feb 19 21:10:46 crc kubenswrapper[4886]: I0219 21:10:46.430082 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" event={"ID":"9ae9d788-4b23-480d-be58-dedda686c24d","Type":"ContainerStarted","Data":"717ed6da04741153e172962f6bed36767f31f25e5e6ddda351548c14466753b4"}
Feb 19 21:10:49 crc kubenswrapper[4886]: I0219 21:10:49.454791 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" event={"ID":"9ae9d788-4b23-480d-be58-dedda686c24d","Type":"ContainerStarted","Data":"d2ede7a0f5aa4b85a5ecc5fe01052d03b307b76b85ebc0b50d8bb0736d1ed81f"}
Feb 19 21:10:55 crc kubenswrapper[4886]: I0219 21:10:55.532218 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" event={"ID":"9ae9d788-4b23-480d-be58-dedda686c24d","Type":"ContainerStarted","Data":"f06a9b3a51f67fe3f4b54f4a50f01fb3724c0b5182102823037c065876926053"}
Feb 19 21:10:55 crc kubenswrapper[4886]: I0219 21:10:55.532657 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:55 crc kubenswrapper[4886]: I0219 21:10:55.535461 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 21:10:55 crc kubenswrapper[4886]: I0219 21:10:55.564563 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" podStartSLOduration=1.9300109600000002 podStartE2EDuration="11.564536842s" podCreationTimestamp="2026-02-19 21:10:44 +0000 UTC" firstStartedPulling="2026-02-19 21:10:45.649184341 +0000 UTC m=+676.277027391" lastFinishedPulling="2026-02-19 21:10:55.283710223 +0000 UTC m=+685.911553273" observedRunningTime="2026-02-19 21:10:55.556838653 +0000 UTC m=+686.184681703" watchObservedRunningTime="2026-02-19 21:10:55.564536842 +0000 UTC m=+686.192379942"
Feb 19 21:11:01 crc kubenswrapper[4886]: I0219 21:11:01.899061 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 19 21:11:01 crc kubenswrapper[4886]: I0219 21:11:01.901705 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 19 21:11:01 crc kubenswrapper[4886]: I0219 21:11:01.906343 4886 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-w6sfm"
Feb 19 21:11:01 crc kubenswrapper[4886]: I0219 21:11:01.906658 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 19 21:11:01 crc kubenswrapper[4886]: I0219 21:11:01.906853 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 19 21:11:01 crc kubenswrapper[4886]: I0219 21:11:01.915189 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.064750 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-394d18a2-68bd-4dee-b669-c624917566af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-394d18a2-68bd-4dee-b669-c624917566af\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") " pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.065183 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdtw\" (UniqueName: \"kubernetes.io/projected/f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28-kube-api-access-9pdtw\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") " pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.166505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-394d18a2-68bd-4dee-b669-c624917566af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-394d18a2-68bd-4dee-b669-c624917566af\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") " pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.166599 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdtw\" (UniqueName: \"kubernetes.io/projected/f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28-kube-api-access-9pdtw\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") " pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.184253 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.184342 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-394d18a2-68bd-4dee-b669-c624917566af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-394d18a2-68bd-4dee-b669-c624917566af\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ec9352971225643dde4c20b9178de73163eaaab2b703b0d6d14311aee56244ef/globalmount\"" pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.201853 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdtw\" (UniqueName: \"kubernetes.io/projected/f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28-kube-api-access-9pdtw\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") " pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.217996 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-394d18a2-68bd-4dee-b669-c624917566af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-394d18a2-68bd-4dee-b669-c624917566af\") pod \"minio\" (UID: \"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28\") " pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.292744 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.538252 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 19 21:11:02 crc kubenswrapper[4886]: I0219 21:11:02.591557 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28","Type":"ContainerStarted","Data":"e4a3e7cb7683456a4e551d3a1cc729a70e4df3af7bb1d8aafaa1ab458e219b75"}
Feb 19 21:11:06 crc kubenswrapper[4886]: I0219 21:11:06.627881 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f0ff7ab1-3fdc-41a7-a4e6-d3793866ee28","Type":"ContainerStarted","Data":"83efe9d758ad9ff8664b49880566873436873bd720a3bab5d2571a017131654f"}
Feb 19 21:11:06 crc kubenswrapper[4886]: I0219 21:11:06.647108 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.465171725 podStartE2EDuration="7.647079189s" podCreationTimestamp="2026-02-19 21:10:59 +0000 UTC" firstStartedPulling="2026-02-19 21:11:02.561791331 +0000 UTC m=+693.189634381" lastFinishedPulling="2026-02-19 21:11:05.743698765 +0000 UTC m=+696.371541845" observedRunningTime="2026-02-19 21:11:06.645536191 +0000 UTC m=+697.273379281" watchObservedRunningTime="2026-02-19 21:11:06.647079189 +0000 UTC m=+697.274922279"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.672663 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"]
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.673870 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.680141 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.680853 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.681355 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-gwrqv"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.681636 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.688000 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.711762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.711834 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.711907 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.711951 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4cn\" (UniqueName: \"kubernetes.io/projected/7ace2275-5b80-431f-8fda-ca350848bc07-kube-api-access-zh4cn\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.712031 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ace2275-5b80-431f-8fda-ca350848bc07-config\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.750502 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"]
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.813483 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.813552 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4cn\" (UniqueName: \"kubernetes.io/projected/7ace2275-5b80-431f-8fda-ca350848bc07-kube-api-access-zh4cn\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.813614 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ace2275-5b80-431f-8fda-ca350848bc07-config\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.813714 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.813788 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.814901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.815494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ace2275-5b80-431f-8fda-ca350848bc07-config\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.819825 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.833638 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7ace2275-5b80-431f-8fda-ca350848bc07-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.850121 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4cn\" (UniqueName: \"kubernetes.io/projected/7ace2275-5b80-431f-8fda-ca350848bc07-kube-api-access-zh4cn\") pod \"logging-loki-distributor-5d5548c9f5-rrspr\" (UID: \"7ace2275-5b80-431f-8fda-ca350848bc07\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.868864 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-n662q"]
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.869701 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.873330 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.874010 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.874123 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.881048 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-n662q"]
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.918453 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"]
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.918986 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc980d53-db1b-43e3-9922-ea78f89031d2-config\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.919044 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgm92\" (UniqueName: \"kubernetes.io/projected/cc980d53-db1b-43e3-9922-ea78f89031d2-kube-api-access-cgm92\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.919064 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.919090 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.919159 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.919191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.919206 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.922149 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.922566 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Feb 19 21:11:11 crc kubenswrapper[4886]: I0219 21:11:11.927992 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"]
Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.015922 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.020787 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c174ac-6edf-4973-b8d0-dc71b548f1c9-config\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"
Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.020838 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"
Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.020890 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName:
\"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.020937 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.020976 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.021014 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.021048 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc980d53-db1b-43e3-9922-ea78f89031d2-config\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.021091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgm92\" (UniqueName: \"kubernetes.io/projected/cc980d53-db1b-43e3-9922-ea78f89031d2-kube-api-access-cgm92\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.021119 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.021152 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.021192 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sr4w\" (UniqueName: \"kubernetes.io/projected/83c174ac-6edf-4973-b8d0-dc71b548f1c9-kube-api-access-2sr4w\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.022136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.022686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc980d53-db1b-43e3-9922-ea78f89031d2-config\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.024118 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.025473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.026136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc980d53-db1b-43e3-9922-ea78f89031d2-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.054627 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cgm92\" (UniqueName: \"kubernetes.io/projected/cc980d53-db1b-43e3-9922-ea78f89031d2-kube-api-access-cgm92\") pod \"logging-loki-querier-76bf7b6d45-n662q\" (UID: \"cc980d53-db1b-43e3-9922-ea78f89031d2\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.122278 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c174ac-6edf-4973-b8d0-dc71b548f1c9-config\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.122329 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.122369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.122411 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: 
\"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.122461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sr4w\" (UniqueName: \"kubernetes.io/projected/83c174ac-6edf-4973-b8d0-dc71b548f1c9-kube-api-access-2sr4w\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.123499 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c174ac-6edf-4973-b8d0-dc71b548f1c9-config\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.123801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.124173 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-54d798b65b-ncwth"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.126845 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.127749 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/83c174ac-6edf-4973-b8d0-dc71b548f1c9-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.128127 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.132695 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.141029 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.144694 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.144694 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-k6gcf" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.148022 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.148404 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.190137 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sr4w\" (UniqueName: 
\"kubernetes.io/projected/83c174ac-6edf-4973-b8d0-dc71b548f1c9-kube-api-access-2sr4w\") pod \"logging-loki-query-frontend-6d6859c548-bx48p\" (UID: \"83c174ac-6edf-4973-b8d0-dc71b548f1c9\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.204762 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54d798b65b-ncwth"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.207273 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.224412 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.225548 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226731 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8769p\" (UniqueName: \"kubernetes.io/projected/0749f3c7-3653-4491-a8b4-3327797bb266-kube-api-access-8769p\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-lokistack-gateway\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226802 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-rbac\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226836 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tls-secret\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tenants\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226888 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226903 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: 
\"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.226921 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.232668 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.273255 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.330217 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.330281 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.330311 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.330330 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8769p\" (UniqueName: \"kubernetes.io/projected/0749f3c7-3653-4491-a8b4-3327797bb266-kube-api-access-8769p\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.330377 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tls-secret\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.330399 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-rbac\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331081 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-lokistack-gateway\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " 
pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331132 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331177 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tenants\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-rbac\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331232 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlfq\" (UniqueName: \"kubernetes.io/projected/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-kube-api-access-9zlfq\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331296 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-lokistack-gateway\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331331 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tls-secret\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331457 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tenants\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.331490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 
21:11:12.331598 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: E0219 21:11:12.331616 4886 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 19 21:11:12 crc kubenswrapper[4886]: E0219 21:11:12.331679 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tls-secret podName:0749f3c7-3653-4491-a8b4-3327797bb266 nodeName:}" failed. No retries permitted until 2026-02-19 21:11:12.831664394 +0000 UTC m=+703.459507444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tls-secret") pod "logging-loki-gateway-54d798b65b-ncwth" (UID: "0749f3c7-3653-4491-a8b4-3327797bb266") : secret "logging-loki-gateway-http" not found Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.332304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-rbac\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.332333 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " 
pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.332455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0749f3c7-3653-4491-a8b4-3327797bb266-lokistack-gateway\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.336690 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tenants\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.337236 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.352230 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8769p\" (UniqueName: \"kubernetes.io/projected/0749f3c7-3653-4491-a8b4-3327797bb266-kube-api-access-8769p\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433211 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-ca-bundle\") 
pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tenants\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433301 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlfq\" (UniqueName: \"kubernetes.io/projected/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-kube-api-access-9zlfq\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433327 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-lokistack-gateway\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433348 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433414 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433443 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tls-secret\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.433462 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-rbac\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.434049 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.434239 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-rbac\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: E0219 21:11:12.434351 4886 secret.go:188] 
Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 19 21:11:12 crc kubenswrapper[4886]: E0219 21:11:12.434409 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tls-secret podName:ae5b30e8-ead2-44ca-bfdd-8e28b23ef040 nodeName:}" failed. No retries permitted until 2026-02-19 21:11:12.934395062 +0000 UTC m=+703.562238112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tls-secret") pod "logging-loki-gateway-54d798b65b-r4kj2" (UID: "ae5b30e8-ead2-44ca-bfdd-8e28b23ef040") : secret "logging-loki-gateway-http" not found Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.434672 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.434878 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-lokistack-gateway\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.436355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tenants\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " 
pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.437895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.460466 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlfq\" (UniqueName: \"kubernetes.io/projected/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-kube-api-access-9zlfq\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.668731 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"] Feb 19 21:11:12 crc kubenswrapper[4886]: W0219 21:11:12.674041 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ace2275_5b80_431f_8fda_ca350848bc07.slice/crio-847085447fb8b5a1aa9a57a649129e3fec195e9c8ca2f1464e481066a2a0bb41 WatchSource:0}: Error finding container 847085447fb8b5a1aa9a57a649129e3fec195e9c8ca2f1464e481066a2a0bb41: Status 404 returned error can't find the container with id 847085447fb8b5a1aa9a57a649129e3fec195e9c8ca2f1464e481066a2a0bb41 Feb 19 21:11:12 crc kubenswrapper[4886]: W0219 21:11:12.740996 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc980d53_db1b_43e3_9922_ea78f89031d2.slice/crio-13f3fc1fbfac5665afea2e6ce3531ce4291c6de508bae83542af8054de694aac WatchSource:0}: Error finding container 
13f3fc1fbfac5665afea2e6ce3531ce4291c6de508bae83542af8054de694aac: Status 404 returned error can't find the container with id 13f3fc1fbfac5665afea2e6ce3531ce4291c6de508bae83542af8054de694aac Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.743715 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-n662q"] Feb 19 21:11:12 crc kubenswrapper[4886]: W0219 21:11:12.780611 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c174ac_6edf_4973_b8d0_dc71b548f1c9.slice/crio-a8c339e57799f68e669d93ebd81fca5f9b506c0578f11e241e510a8acca8c1b1 WatchSource:0}: Error finding container a8c339e57799f68e669d93ebd81fca5f9b506c0578f11e241e510a8acca8c1b1: Status 404 returned error can't find the container with id a8c339e57799f68e669d93ebd81fca5f9b506c0578f11e241e510a8acca8c1b1 Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.781517 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.823733 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.824860 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.827418 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.828976 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.837003 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.839944 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tls-secret\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.848966 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0749f3c7-3653-4491-a8b4-3327797bb266-tls-secret\") pod \"logging-loki-gateway-54d798b65b-ncwth\" (UID: \"0749f3c7-3653-4491-a8b4-3327797bb266\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.910885 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.911916 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.914603 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.915047 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.918254 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.940981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941207 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tls-secret\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941382 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941551 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941626 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941695 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc 
kubenswrapper[4886]: I0219 21:11:12.941803 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6mhm\" (UniqueName: \"kubernetes.io/projected/d1d2e425-8e15-48fc-a486-535954b89459-kube-api-access-f6mhm\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941888 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.941960 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d2e425-8e15-48fc-a486-535954b89459-config\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.942035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bvg\" (UniqueName: \"kubernetes.io/projected/b703c587-ef88-4e74-a6a5-c71a11625f76-kube-api-access-n9bvg\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.942123 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: 
\"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.942195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71d562e8-1061-44db-8d0a-0416974dc20c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71d562e8-1061-44db-8d0a-0416974dc20c\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.942325 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b703c587-ef88-4e74-a6a5-c71a11625f76-config\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.942418 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:12 crc kubenswrapper[4886]: I0219 21:11:12.944498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae5b30e8-ead2-44ca-bfdd-8e28b23ef040-tls-secret\") pod \"logging-loki-gateway-54d798b65b-r4kj2\" (UID: \"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040\") " pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.029810 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.030608 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.032610 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.032759 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.041703 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.043689 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.043959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044037 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044083 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044115 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff2edc4-4f98-4d66-84f7-24a345741eec-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044175 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044205 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d2e425-8e15-48fc-a486-535954b89459-config\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71d562e8-1061-44db-8d0a-0416974dc20c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71d562e8-1061-44db-8d0a-0416974dc20c\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044320 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b703c587-ef88-4e74-a6a5-c71a11625f76-config\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044361 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sdmn\" (UniqueName: \"kubernetes.io/projected/6ff2edc4-4f98-4d66-84f7-24a345741eec-kube-api-access-6sdmn\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044393 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044426 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044454 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044481 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044530 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044556 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6mhm\" (UniqueName: \"kubernetes.io/projected/d1d2e425-8e15-48fc-a486-535954b89459-kube-api-access-f6mhm\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044582 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.044648 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9bvg\" (UniqueName: \"kubernetes.io/projected/b703c587-ef88-4e74-a6a5-c71a11625f76-kube-api-access-n9bvg\") pod 
\"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.045395 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d2e425-8e15-48fc-a486-535954b89459-config\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.046216 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.046485 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b703c587-ef88-4e74-a6a5-c71a11625f76-config\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.048429 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.048782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " 
pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.050644 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.051336 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.052074 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.052182 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/58dff75d71feac3d025b75ac6ea8bdeab4b5f65f4b5cb240fa861a3c308ad5cc/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.052076 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b703c587-ef88-4e74-a6a5-c71a11625f76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.052568 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.052779 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.052807 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/81d2da1e20f64c729db4cb0916f1cdc62e7283f7dce71bdbcfa1b28707cc6ceb/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.061513 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/d1d2e425-8e15-48fc-a486-535954b89459-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.061805 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.061830 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71d562e8-1061-44db-8d0a-0416974dc20c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71d562e8-1061-44db-8d0a-0416974dc20c\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/21548392c0f6bf960bc2f4cbafef81da346a67cdde3255fb5c023b8cce6a7369/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.067060 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6mhm\" (UniqueName: \"kubernetes.io/projected/d1d2e425-8e15-48fc-a486-535954b89459-kube-api-access-f6mhm\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.078035 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9bvg\" (UniqueName: \"kubernetes.io/projected/b703c587-ef88-4e74-a6a5-c71a11625f76-kube-api-access-n9bvg\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.098455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09d5471-a952-4c07-b49d-dba3d0a83931\") pod \"logging-loki-compactor-0\" (UID: \"b703c587-ef88-4e74-a6a5-c71a11625f76\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.101834 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71d562e8-1061-44db-8d0a-0416974dc20c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71d562e8-1061-44db-8d0a-0416974dc20c\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.113199 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5f5718-a565-4312-b1ca-896b0aa63e7e\") pod \"logging-loki-ingester-0\" (UID: \"d1d2e425-8e15-48fc-a486-535954b89459\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146228 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sdmn\" (UniqueName: \"kubernetes.io/projected/6ff2edc4-4f98-4d66-84f7-24a345741eec-kube-api-access-6sdmn\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146286 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146315 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146331 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff2edc4-4f98-4d66-84f7-24a345741eec-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.146434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.147456 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.148091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff2edc4-4f98-4d66-84f7-24a345741eec-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.151538 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.154209 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.154231 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7418291ef7833f68e4dd53e83099ce2e6df98b0f065cd30b0135d9c5f8b16262/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.161899 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.162120 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sdmn\" (UniqueName: \"kubernetes.io/projected/6ff2edc4-4f98-4d66-84f7-24a345741eec-kube-api-access-6sdmn\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.162370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6ff2edc4-4f98-4d66-84f7-24a345741eec-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.168047 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.182998 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-20d2d5e2-3576-4cd0-be3a-bd168829f37a\") pod \"logging-loki-index-gateway-0\" (UID: \"6ff2edc4-4f98-4d66-84f7-24a345741eec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.191469 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.230805 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.330143 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54d798b65b-ncwth"]
Feb 19 21:11:13 crc kubenswrapper[4886]: W0219 21:11:13.336525 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0749f3c7_3653_4491_a8b4_3327797bb266.slice/crio-68cdbfcf5968f1a77ca256afdbc92708ecf85a793ff7f041930881f050b533dc WatchSource:0}: Error finding container 68cdbfcf5968f1a77ca256afdbc92708ecf85a793ff7f041930881f050b533dc: Status 404 returned error can't find the container with id 68cdbfcf5968f1a77ca256afdbc92708ecf85a793ff7f041930881f050b533dc
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.435883 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.620037 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"]
Feb 19 21:11:13 crc kubenswrapper[4886]: W0219 21:11:13.622432 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae5b30e8_ead2_44ca_bfdd_8e28b23ef040.slice/crio-032f5a4a37b98093e20d849eaccf531041edcb874d674a41c07212435c4221de WatchSource:0}: Error finding container 032f5a4a37b98093e20d849eaccf531041edcb874d674a41c07212435c4221de: Status 404 returned error can't find the container with id 032f5a4a37b98093e20d849eaccf531041edcb874d674a41c07212435c4221de
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.684124 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" event={"ID":"cc980d53-db1b-43e3-9922-ea78f89031d2","Type":"ContainerStarted","Data":"13f3fc1fbfac5665afea2e6ce3531ce4291c6de508bae83542af8054de694aac"}
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.685938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" event={"ID":"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040","Type":"ContainerStarted","Data":"032f5a4a37b98093e20d849eaccf531041edcb874d674a41c07212435c4221de"}
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.687177 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" event={"ID":"83c174ac-6edf-4973-b8d0-dc71b548f1c9","Type":"ContainerStarted","Data":"a8c339e57799f68e669d93ebd81fca5f9b506c0578f11e241e510a8acca8c1b1"}
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.688233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr" event={"ID":"7ace2275-5b80-431f-8fda-ca350848bc07","Type":"ContainerStarted","Data":"847085447fb8b5a1aa9a57a649129e3fec195e9c8ca2f1464e481066a2a0bb41"}
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.688984 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" event={"ID":"0749f3c7-3653-4491-a8b4-3327797bb266","Type":"ContainerStarted","Data":"68cdbfcf5968f1a77ca256afdbc92708ecf85a793ff7f041930881f050b533dc"}
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.703185 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 19 21:11:13 crc kubenswrapper[4886]: W0219 21:11:13.712374 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1d2e425_8e15_48fc_a486_535954b89459.slice/crio-49430f1b581aad19a81d5df353ba2b8203bb53bec51cbb7c7fc1232e11a729b9 WatchSource:0}: Error finding container 49430f1b581aad19a81d5df353ba2b8203bb53bec51cbb7c7fc1232e11a729b9: Status 404 returned error can't find the container with id 49430f1b581aad19a81d5df353ba2b8203bb53bec51cbb7c7fc1232e11a729b9
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.758126 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 19 21:11:13 crc kubenswrapper[4886]: W0219 21:11:13.765969 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb703c587_ef88_4e74_a6a5_c71a11625f76.slice/crio-381b507e079288e9fb385941899137a208b75d9d491138855180e0a4519b8145 WatchSource:0}: Error finding container 381b507e079288e9fb385941899137a208b75d9d491138855180e0a4519b8145: Status 404 returned error can't find the container with id 381b507e079288e9fb385941899137a208b75d9d491138855180e0a4519b8145
Feb 19 21:11:13 crc kubenswrapper[4886]: I0219 21:11:13.843990 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 19 21:11:14 crc kubenswrapper[4886]: I0219 21:11:14.697585 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"6ff2edc4-4f98-4d66-84f7-24a345741eec","Type":"ContainerStarted","Data":"b88e7816d82d68dd07335d2206c25b6c1024fa7bd9a8f938ace28d59b3563636"}
Feb 19 21:11:14 crc kubenswrapper[4886]: I0219 21:11:14.700440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"d1d2e425-8e15-48fc-a486-535954b89459","Type":"ContainerStarted","Data":"49430f1b581aad19a81d5df353ba2b8203bb53bec51cbb7c7fc1232e11a729b9"}
Feb 19 21:11:14 crc kubenswrapper[4886]: I0219 21:11:14.703245 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b703c587-ef88-4e74-a6a5-c71a11625f76","Type":"ContainerStarted","Data":"381b507e079288e9fb385941899137a208b75d9d491138855180e0a4519b8145"}
Feb 19 21:11:16 crc kubenswrapper[4886]: I0219 21:11:16.723467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr" event={"ID":"7ace2275-5b80-431f-8fda-ca350848bc07","Type":"ContainerStarted","Data":"06b8a2fd0c5c47a70361e496a39cdc00da2f49de7f44046d92a614526e79e7ac"}
Feb 19 21:11:16 crc kubenswrapper[4886]: I0219 21:11:16.725132 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:16 crc kubenswrapper[4886]: I0219 21:11:16.746832 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr" podStartSLOduration=1.960761861 podStartE2EDuration="5.746811497s" podCreationTimestamp="2026-02-19 21:11:11 +0000 UTC" firstStartedPulling="2026-02-19 21:11:12.675731559 +0000 UTC m=+703.303574609" lastFinishedPulling="2026-02-19 21:11:16.461781155 +0000 UTC m=+707.089624245" observedRunningTime="2026-02-19 21:11:16.741708931 +0000 UTC m=+707.369551991" watchObservedRunningTime="2026-02-19 21:11:16.746811497 +0000 UTC m=+707.374654557"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.741075 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" event={"ID":"0749f3c7-3653-4491-a8b4-3327797bb266","Type":"ContainerStarted","Data":"5cff08bdc0e2e7e50061a27cc4a24d80d0d7585ed62b19251eb362f433ac5cc9"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.743603 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"6ff2edc4-4f98-4d66-84f7-24a345741eec","Type":"ContainerStarted","Data":"9dedba1db0063f0a28426ec27d19760d0174a571ae5d30d501ed4284900ff68c"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.743781 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.746797 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" event={"ID":"cc980d53-db1b-43e3-9922-ea78f89031d2","Type":"ContainerStarted","Data":"907540c1ee69bb1a00f20949f5f5d7ba38224a27286b56d3aa4c0f481f357a31"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.747114 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.749590 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"d1d2e425-8e15-48fc-a486-535954b89459","Type":"ContainerStarted","Data":"2211a91a303490b05991b33a30cde2128f40f3c9bc34a5474a9bf6d15281e24b"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.749775 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.751990 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" event={"ID":"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040","Type":"ContainerStarted","Data":"97f86958ae88ead73fc8e47441fa1192ea945cbbb60385d04704cb4629d5c626"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.754326 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b703c587-ef88-4e74-a6a5-c71a11625f76","Type":"ContainerStarted","Data":"896fa67f8d2139538395c9683a0c9a5aeb295e8a1554de46a4ded4daeadd093b"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.754891 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.759393 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" event={"ID":"83c174ac-6edf-4973-b8d0-dc71b548f1c9","Type":"ContainerStarted","Data":"59143d338af1701ab076d7b3f27332802fd08088893a8d4e2b8f17d4e7c84f40"}
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.759430 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.813365 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" podStartSLOduration=3.133909603 podStartE2EDuration="6.813336626s" podCreationTimestamp="2026-02-19 21:11:11 +0000 UTC" firstStartedPulling="2026-02-19 21:11:12.782122457 +0000 UTC m=+703.409965517" lastFinishedPulling="2026-02-19 21:11:16.46154948 +0000 UTC m=+707.089392540" observedRunningTime="2026-02-19 21:11:17.802335805 +0000 UTC m=+708.430178865" watchObservedRunningTime="2026-02-19 21:11:17.813336626 +0000 UTC m=+708.441179716"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.817942 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=4.287602126 podStartE2EDuration="6.817868697s" podCreationTimestamp="2026-02-19 21:11:11 +0000 UTC" firstStartedPulling="2026-02-19 21:11:13.849745272 +0000 UTC m=+704.477588342" lastFinishedPulling="2026-02-19 21:11:16.380011853 +0000 UTC m=+707.007854913" observedRunningTime="2026-02-19 21:11:17.772007399 +0000 UTC m=+708.399850459" watchObservedRunningTime="2026-02-19 21:11:17.817868697 +0000 UTC m=+708.445711787"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.830662 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=4.078757708 podStartE2EDuration="6.830643482s" podCreationTimestamp="2026-02-19 21:11:11 +0000 UTC" firstStartedPulling="2026-02-19 21:11:13.715194612 +0000 UTC m=+704.343037662" lastFinishedPulling="2026-02-19 21:11:16.467080336 +0000 UTC m=+707.094923436" observedRunningTime="2026-02-19 21:11:17.819442986 +0000 UTC m=+708.447286076" watchObservedRunningTime="2026-02-19 21:11:17.830643482 +0000 UTC m=+708.458486532"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.857530 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=4.140583079 podStartE2EDuration="6.857504523s" podCreationTimestamp="2026-02-19 21:11:11 +0000 UTC" firstStartedPulling="2026-02-19 21:11:13.770279797 +0000 UTC m=+704.398122857" lastFinishedPulling="2026-02-19 21:11:16.487201251 +0000 UTC m=+707.115044301" observedRunningTime="2026-02-19 21:11:17.842459822 +0000 UTC m=+708.470302892" watchObservedRunningTime="2026-02-19 21:11:17.857504523 +0000 UTC m=+708.485347603"
Feb 19 21:11:17 crc kubenswrapper[4886]: I0219 21:11:17.872670 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" podStartSLOduration=3.1540645290000002 podStartE2EDuration="6.872626055s" podCreationTimestamp="2026-02-19 21:11:11 +0000 UTC" firstStartedPulling="2026-02-19 21:11:12.743070716 +0000 UTC m=+703.370913766" lastFinishedPulling="2026-02-19 21:11:16.461632202 +0000 UTC m=+707.089475292" observedRunningTime="2026-02-19 21:11:17.866205347 +0000 UTC m=+708.494048407" watchObservedRunningTime="2026-02-19 21:11:17.872626055 +0000 UTC m=+708.500469115"
Feb 19 21:11:18 crc kubenswrapper[4886]: I0219 21:11:18.325145 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 21:11:18 crc kubenswrapper[4886]: I0219 21:11:18.325211 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.779077 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" event={"ID":"0749f3c7-3653-4491-a8b4-3327797bb266","Type":"ContainerStarted","Data":"8d38f3998dfc68f67088df772223015776c057972734099d143d8c5cd62da7bc"}
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.780838 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.781192 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.783007 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" event={"ID":"ae5b30e8-ead2-44ca-bfdd-8e28b23ef040","Type":"ContainerStarted","Data":"258c4d9c6ffdbeb0432df964c7e6300a07df5ccce9bd1164094ade3b537270b9"}
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.783779 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.784247 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.799835 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.801860 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.802345 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.818445 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.855828 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podStartSLOduration=2.575450607 podStartE2EDuration="7.855806066s" podCreationTimestamp="2026-02-19 21:11:12 +0000 UTC" firstStartedPulling="2026-02-19 21:11:13.343229081 +0000 UTC m=+703.971072131" lastFinishedPulling="2026-02-19 21:11:18.62358454 +0000 UTC m=+709.251427590" observedRunningTime="2026-02-19 21:11:19.853793867 +0000 UTC m=+710.481636927" watchObservedRunningTime="2026-02-19 21:11:19.855806066 +0000 UTC m=+710.483649136"
Feb 19 21:11:19 crc kubenswrapper[4886]: I0219 21:11:19.906804 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podStartSLOduration=2.906905521 podStartE2EDuration="7.90678198s" podCreationTimestamp="2026-02-19 21:11:12 +0000 UTC" firstStartedPulling="2026-02-19 21:11:13.62448085 +0000 UTC m=+704.252323920" lastFinishedPulling="2026-02-19 21:11:18.624357319 +0000 UTC m=+709.252200379" observedRunningTime="2026-02-19 21:11:19.904832622 +0000 UTC m=+710.532675672" watchObservedRunningTime="2026-02-19 21:11:19.90678198 +0000 UTC m=+710.534625050"
Feb 19 21:11:32 crc kubenswrapper[4886]: I0219 21:11:32.031543 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr"
Feb 19 21:11:32 crc kubenswrapper[4886]: I0219 21:11:32.215064 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q"
Feb 19 21:11:32 crc kubenswrapper[4886]: I0219 21:11:32.281728 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p"
Feb 19 21:11:33 crc kubenswrapper[4886]: I0219 21:11:33.200929 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Feb 19 21:11:33 crc kubenswrapper[4886]: I0219 21:11:33.201052 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 19 21:11:33 crc kubenswrapper[4886]: I0219 21:11:33.243236 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Feb 19 21:11:33 crc kubenswrapper[4886]: I0219 21:11:33.444461 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 19 21:11:43 crc kubenswrapper[4886]: I0219 21:11:43.200105 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Feb 19 21:11:43 crc kubenswrapper[4886]: I0219 21:11:43.200970 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 19 21:11:48 crc kubenswrapper[4886]: I0219 21:11:48.324791 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 21:11:48 crc kubenswrapper[4886]: I0219 21:11:48.325175 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 21:11:53 crc kubenswrapper[4886]: I0219 21:11:53.199402 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Feb 19 21:11:53 crc kubenswrapper[4886]: I0219 21:11:53.200072 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 19 21:12:01 crc kubenswrapper[4886]: I0219 21:12:01.543307 4886 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 21:12:03 crc kubenswrapper[4886]: I0219 21:12:03.195490 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Feb 19 21:12:03 crc kubenswrapper[4886]: I0219 21:12:03.195544 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 19 21:12:13 crc kubenswrapper[4886]: I0219 21:12:13.198554 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Feb 19 21:12:18 crc kubenswrapper[4886]: I0219 21:12:18.324927 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 21:12:18 crc kubenswrapper[4886]: I0219 21:12:18.325499 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 21:12:18 crc kubenswrapper[4886]: I0219 21:12:18.325578 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5"
Feb 19 21:12:18 crc kubenswrapper[4886]: I0219 21:12:18.327233 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30855271763aa03f95502a62cdac34f3aaa5896ebbe6fec54e402ff34f490d71"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 21:12:18 crc kubenswrapper[4886]: I0219 21:12:18.327378 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://30855271763aa03f95502a62cdac34f3aaa5896ebbe6fec54e402ff34f490d71" gracePeriod=600
Feb 19 21:12:19 crc kubenswrapper[4886]: I0219 21:12:19.434031 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="30855271763aa03f95502a62cdac34f3aaa5896ebbe6fec54e402ff34f490d71" exitCode=0
Feb 19 21:12:19 crc kubenswrapper[4886]: I0219 21:12:19.434113 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"30855271763aa03f95502a62cdac34f3aaa5896ebbe6fec54e402ff34f490d71"}
Feb 19 21:12:19 crc kubenswrapper[4886]: I0219 21:12:19.434635 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"8df8acb7039f9357ec20617d0239697fac24843c97f7ed406d68afe9849d624e"}
Feb 19 21:12:19 crc kubenswrapper[4886]: I0219 21:12:19.434710 4886 scope.go:117] "RemoveContainer" containerID="5ae729cd998c06e3f4a3bc7ed90125f52db03861def52b8d2aacbec9bd8a3520"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.496783 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-nsr8h"]
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.498934 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-nsr8h"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.502520 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.509709 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.510495 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lj8lw"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.511390 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.511823 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.521039 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-nsr8h"]
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.524183 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.589801 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-nsr8h"]
Feb 19 21:12:31 crc kubenswrapper[4886]: E0219 21:12:31.590530 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-hrv4j metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-hrv4j metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-nsr8h"
podUID="76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685339 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-tmp\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685431 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-token\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685459 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-syslog-receiver\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-sa-token\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685520 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc 
kubenswrapper[4886]: I0219 21:12:31.685575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-trusted-ca\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685596 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-metrics\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-datadir\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685648 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrv4j\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-kube-api-access-hrv4j\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685746 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-entrypoint\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.685798 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config-openshift-service-cacrt\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787404 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-tmp\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787616 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-token\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787672 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-syslog-receiver\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787717 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-sa-token\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787765 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787821 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-trusted-ca\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787862 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-metrics\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787927 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-datadir\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.787973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrv4j\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-kube-api-access-hrv4j\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.788064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-entrypoint\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " 
pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.788100 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-datadir\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.788200 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config-openshift-service-cacrt\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.788634 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.788839 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-entrypoint\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.788987 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-trusted-ca\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.789041 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config-openshift-service-cacrt\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.792863 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-token\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.793701 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-syslog-receiver\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.794043 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-metrics\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.794577 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-tmp\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.805169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrv4j\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-kube-api-access-hrv4j\") pod \"collector-nsr8h\" (UID: 
\"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:31 crc kubenswrapper[4886]: I0219 21:12:31.810977 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-sa-token\") pod \"collector-nsr8h\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " pod="openshift-logging/collector-nsr8h" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.546168 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-nsr8h" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.576778 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-nsr8h" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703161 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-trusted-ca\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703213 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-token\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703249 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config-openshift-service-cacrt\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703299 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-syslog-receiver\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703371 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-metrics\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703399 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrv4j\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-kube-api-access-hrv4j\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703418 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-entrypoint\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703465 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-datadir\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-sa-token\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: 
\"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703618 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-tmp\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703691 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config\") pod \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\" (UID: \"76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2\") " Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703799 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.703871 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-datadir" (OuterVolumeSpecName: "datadir") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.704434 4886 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-datadir\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.704420 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.704455 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.704679 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.705415 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config" (OuterVolumeSpecName: "config") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.708117 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.709727 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-sa-token" (OuterVolumeSpecName: "sa-token") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.711156 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-kube-api-access-hrv4j" (OuterVolumeSpecName: "kube-api-access-hrv4j") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "kube-api-access-hrv4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.711933 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-metrics" (OuterVolumeSpecName: "metrics") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.712311 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-tmp" (OuterVolumeSpecName: "tmp") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.714029 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-token" (OuterVolumeSpecName: "collector-token") pod "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" (UID: "76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806536 4886 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806589 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrv4j\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-kube-api-access-hrv4j\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806613 4886 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806634 4886 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 
21:12:32.806658 4886 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806681 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806704 4886 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-token\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806728 4886 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:32 crc kubenswrapper[4886]: I0219 21:12:32.806752 4886 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.555573 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-nsr8h"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.636214 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-nsr8h"]
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.646686 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-nsr8h"]
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.676770 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-2tm5m"]
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.678795 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.681126 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lj8lw"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.682393 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.691150 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.691557 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.693041 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.694440 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-2tm5m"]
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.698896 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.824920 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97aad659-4020-4e3c-9283-85da4027ab63-tmp\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825040 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-entrypoint\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825104 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-metrics\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825149 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-config-openshift-service-cacrt\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825218 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-collector-syslog-receiver\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825301 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/97aad659-4020-4e3c-9283-85da4027ab63-sa-token\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/97aad659-4020-4e3c-9283-85da4027ab63-datadir\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-trusted-ca\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825551 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5v8\" (UniqueName: \"kubernetes.io/projected/97aad659-4020-4e3c-9283-85da4027ab63-kube-api-access-6p5v8\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-collector-token\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.825673 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-config\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.926870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-trusted-ca\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.926957 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p5v8\" (UniqueName: \"kubernetes.io/projected/97aad659-4020-4e3c-9283-85da4027ab63-kube-api-access-6p5v8\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.927013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-collector-token\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.927084 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-config\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.927126 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97aad659-4020-4e3c-9283-85da4027ab63-tmp\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.928643 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-config\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.928789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-trusted-ca\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-entrypoint\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929379 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-metrics\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929448 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-config-openshift-service-cacrt\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929496 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-collector-syslog-receiver\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/97aad659-4020-4e3c-9283-85da4027ab63-sa-token\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929663 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/97aad659-4020-4e3c-9283-85da4027ab63-datadir\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.929864 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/97aad659-4020-4e3c-9283-85da4027ab63-datadir\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.931567 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-config-openshift-service-cacrt\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.937564 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97aad659-4020-4e3c-9283-85da4027ab63-tmp\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.937849 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/97aad659-4020-4e3c-9283-85da4027ab63-entrypoint\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.938458 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-collector-token\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.938939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-metrics\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.939087 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/97aad659-4020-4e3c-9283-85da4027ab63-collector-syslog-receiver\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.960512 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p5v8\" (UniqueName: \"kubernetes.io/projected/97aad659-4020-4e3c-9283-85da4027ab63-kube-api-access-6p5v8\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:33 crc kubenswrapper[4886]: I0219 21:12:33.961873 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/97aad659-4020-4e3c-9283-85da4027ab63-sa-token\") pod \"collector-2tm5m\" (UID: \"97aad659-4020-4e3c-9283-85da4027ab63\") " pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:34 crc kubenswrapper[4886]: I0219 21:12:34.006466 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-2tm5m"
Feb 19 21:12:34 crc kubenswrapper[4886]: I0219 21:12:34.286781 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-2tm5m"]
Feb 19 21:12:34 crc kubenswrapper[4886]: I0219 21:12:34.564121 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-2tm5m" event={"ID":"97aad659-4020-4e3c-9283-85da4027ab63","Type":"ContainerStarted","Data":"2ae92a2d9a4fb0021676425a940a9b61676c10974f716a72cbbb133182adb35e"}
Feb 19 21:12:34 crc kubenswrapper[4886]: I0219 21:12:34.618129 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2" path="/var/lib/kubelet/pods/76a5a14d-65eb-4a8b-9d42-86e2b6ac8cf2/volumes"
Feb 19 21:12:41 crc kubenswrapper[4886]: I0219 21:12:41.620662 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-2tm5m" event={"ID":"97aad659-4020-4e3c-9283-85da4027ab63","Type":"ContainerStarted","Data":"e1c355afad0d67f9f47c8549223795e1648d8f1012c30265a2cc07cfa59bc20b"}
Feb 19 21:12:41 crc kubenswrapper[4886]: I0219 21:12:41.657257 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-2tm5m" podStartSLOduration=2.423571601 podStartE2EDuration="8.657231163s" podCreationTimestamp="2026-02-19 21:12:33 +0000 UTC" firstStartedPulling="2026-02-19 21:12:34.2957255 +0000 UTC m=+784.923568560" lastFinishedPulling="2026-02-19 21:12:40.529385062 +0000 UTC m=+791.157228122" observedRunningTime="2026-02-19 21:12:41.651691747 +0000 UTC m=+792.279534817" watchObservedRunningTime="2026-02-19 21:12:41.657231163 +0000 UTC m=+792.285074243"
Feb 19 21:13:11 crc kubenswrapper[4886]: I0219 21:13:11.892341 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"]
Feb 19 21:13:11 crc kubenswrapper[4886]: I0219 21:13:11.895467 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:11 crc kubenswrapper[4886]: I0219 21:13:11.898822 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 21:13:11 crc kubenswrapper[4886]: I0219 21:13:11.913179 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"]
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.019416 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.019488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.019518 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6tk\" (UniqueName: \"kubernetes.io/projected/b80eb6dd-9406-47cb-b200-8855fe83ef94-kube-api-access-wq6tk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.120776 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.121095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.121210 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6tk\" (UniqueName: \"kubernetes.io/projected/b80eb6dd-9406-47cb-b200-8855fe83ef94-kube-api-access-wq6tk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.121464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.121817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.170605 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6tk\" (UniqueName: \"kubernetes.io/projected/b80eb6dd-9406-47cb-b200-8855fe83ef94-kube-api-access-wq6tk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.212749 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.477050 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"]
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.913227 4886 generic.go:334] "Generic (PLEG): container finished" podID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerID="90a56c8180ccdb1f0d6b6b4b7595a40dd4cb536329550ccf30b789a23611be24" exitCode=0
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.913333 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg" event={"ID":"b80eb6dd-9406-47cb-b200-8855fe83ef94","Type":"ContainerDied","Data":"90a56c8180ccdb1f0d6b6b4b7595a40dd4cb536329550ccf30b789a23611be24"}
Feb 19 21:13:12 crc kubenswrapper[4886]: I0219 21:13:12.913369 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg" event={"ID":"b80eb6dd-9406-47cb-b200-8855fe83ef94","Type":"ContainerStarted","Data":"8f9d3f83c425a155a55abf97bb266ac9694b5cac68753a60547200cface1aae6"}
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.241207 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wns9m"]
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.243490 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.247444 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wns9m"]
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.360411 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-utilities\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.360497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-catalog-content\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.360569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vhh\" (UniqueName: \"kubernetes.io/projected/affc0609-88b0-492e-9ff6-33cd6bb58188-kube-api-access-22vhh\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.462061 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-utilities\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.462154 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-catalog-content\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.462201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22vhh\" (UniqueName: \"kubernetes.io/projected/affc0609-88b0-492e-9ff6-33cd6bb58188-kube-api-access-22vhh\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.462664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-utilities\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.462779 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-catalog-content\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.486393 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vhh\" (UniqueName: \"kubernetes.io/projected/affc0609-88b0-492e-9ff6-33cd6bb58188-kube-api-access-22vhh\") pod \"redhat-operators-wns9m\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.578799 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.844455 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wns9m"]
Feb 19 21:13:14 crc kubenswrapper[4886]: W0219 21:13:14.854413 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffc0609_88b0_492e_9ff6_33cd6bb58188.slice/crio-3546a1018ca2700be4961d3341e8b5bd0fe60584d93517e4d79390e8d3c3c498 WatchSource:0}: Error finding container 3546a1018ca2700be4961d3341e8b5bd0fe60584d93517e4d79390e8d3c3c498: Status 404 returned error can't find the container with id 3546a1018ca2700be4961d3341e8b5bd0fe60584d93517e4d79390e8d3c3c498
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.928620 4886 generic.go:334] "Generic (PLEG): container finished" podID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerID="315c887b4984259413da8bb01439df150c7de48044362b162e7142c323670a09" exitCode=0
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.928722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg" event={"ID":"b80eb6dd-9406-47cb-b200-8855fe83ef94","Type":"ContainerDied","Data":"315c887b4984259413da8bb01439df150c7de48044362b162e7142c323670a09"}
Feb 19 21:13:14 crc kubenswrapper[4886]: I0219 21:13:14.930745 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wns9m" event={"ID":"affc0609-88b0-492e-9ff6-33cd6bb58188","Type":"ContainerStarted","Data":"3546a1018ca2700be4961d3341e8b5bd0fe60584d93517e4d79390e8d3c3c498"}
Feb 19 21:13:15 crc kubenswrapper[4886]: I0219 21:13:15.956701 4886 generic.go:334] "Generic (PLEG): container finished" podID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerID="507306355f0ddc5bb5e60a331738e1e4d316e50d3ab6035e939ed8db6d8072a5" exitCode=0
Feb 19 21:13:15 crc kubenswrapper[4886]: I0219 21:13:15.957282 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg" event={"ID":"b80eb6dd-9406-47cb-b200-8855fe83ef94","Type":"ContainerDied","Data":"507306355f0ddc5bb5e60a331738e1e4d316e50d3ab6035e939ed8db6d8072a5"}
Feb 19 21:13:15 crc kubenswrapper[4886]: I0219 21:13:15.959479 4886 generic.go:334] "Generic (PLEG): container finished" podID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerID="fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19" exitCode=0
Feb 19 21:13:15 crc kubenswrapper[4886]: I0219 21:13:15.959533 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wns9m" event={"ID":"affc0609-88b0-492e-9ff6-33cd6bb58188","Type":"ContainerDied","Data":"fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19"}
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.328523 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.412839 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-util\") pod \"b80eb6dd-9406-47cb-b200-8855fe83ef94\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") "
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.413067 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-bundle\") pod \"b80eb6dd-9406-47cb-b200-8855fe83ef94\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") "
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.413127 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq6tk\" (UniqueName: \"kubernetes.io/projected/b80eb6dd-9406-47cb-b200-8855fe83ef94-kube-api-access-wq6tk\") pod \"b80eb6dd-9406-47cb-b200-8855fe83ef94\" (UID: \"b80eb6dd-9406-47cb-b200-8855fe83ef94\") "
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.413577 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-bundle" (OuterVolumeSpecName: "bundle") pod "b80eb6dd-9406-47cb-b200-8855fe83ef94" (UID: "b80eb6dd-9406-47cb-b200-8855fe83ef94"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.431450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80eb6dd-9406-47cb-b200-8855fe83ef94-kube-api-access-wq6tk" (OuterVolumeSpecName: "kube-api-access-wq6tk") pod "b80eb6dd-9406-47cb-b200-8855fe83ef94" (UID: "b80eb6dd-9406-47cb-b200-8855fe83ef94"). InnerVolumeSpecName "kube-api-access-wq6tk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.436866 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-util" (OuterVolumeSpecName: "util") pod "b80eb6dd-9406-47cb-b200-8855fe83ef94" (UID: "b80eb6dd-9406-47cb-b200-8855fe83ef94"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.514619 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.514660 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq6tk\" (UniqueName: \"kubernetes.io/projected/b80eb6dd-9406-47cb-b200-8855fe83ef94-kube-api-access-wq6tk\") on node \"crc\" DevicePath \"\""
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.514675 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80eb6dd-9406-47cb-b200-8855fe83ef94-util\") on node \"crc\" DevicePath \"\""
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.975757 4886 generic.go:334] "Generic (PLEG): container finished" podID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerID="fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8" exitCode=0
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.975827 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wns9m" event={"ID":"affc0609-88b0-492e-9ff6-33cd6bb58188","Type":"ContainerDied","Data":"fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8"}
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.979704 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg" event={"ID":"b80eb6dd-9406-47cb-b200-8855fe83ef94","Type":"ContainerDied","Data":"8f9d3f83c425a155a55abf97bb266ac9694b5cac68753a60547200cface1aae6"}
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.979761 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f9d3f83c425a155a55abf97bb266ac9694b5cac68753a60547200cface1aae6"
Feb 19 21:13:17 crc kubenswrapper[4886]: I0219 21:13:17.979764 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca7hcvg"
Feb 19 21:13:18 crc kubenswrapper[4886]: I0219 21:13:18.987031 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wns9m" event={"ID":"affc0609-88b0-492e-9ff6-33cd6bb58188","Type":"ContainerStarted","Data":"1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27"}
Feb 19 21:13:19 crc kubenswrapper[4886]: I0219 21:13:19.012678 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wns9m" podStartSLOduration=2.610131367 podStartE2EDuration="5.012657235s" podCreationTimestamp="2026-02-19 21:13:14 +0000 UTC" firstStartedPulling="2026-02-19 21:13:15.962216591 +0000 UTC m=+826.590059671" lastFinishedPulling="2026-02-19 21:13:18.364742489 +0000 UTC m=+828.992585539" observedRunningTime="2026-02-19 21:13:19.009752974 +0000 UTC m=+829.637596034" watchObservedRunningTime="2026-02-19 21:13:19.012657235 +0000 UTC m=+829.640500305"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.396640 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-v4789"]
Feb 19 21:13:21 crc kubenswrapper[4886]: E0219 21:13:21.397377 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="util"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.397399 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="util"
Feb 19 21:13:21 crc kubenswrapper[4886]: E0219 21:13:21.397426 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="extract"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.397439 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="extract"
Feb 19 21:13:21 crc kubenswrapper[4886]: E0219 21:13:21.397467 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="pull"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.397479 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="pull"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.397672 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80eb6dd-9406-47cb-b200-8855fe83ef94" containerName="extract"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.398368 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.400371 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2ntrn"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.400896 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.401777 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.406282 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-v4789"]
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.469486 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhn9\" (UniqueName: \"kubernetes.io/projected/6001a4d8-98fd-45aa-b473-a50a938ae9c7-kube-api-access-gdhn9\") pod \"nmstate-operator-694c9596b7-v4789\" (UID: \"6001a4d8-98fd-45aa-b473-a50a938ae9c7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.571657 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdhn9\" (UniqueName: \"kubernetes.io/projected/6001a4d8-98fd-45aa-b473-a50a938ae9c7-kube-api-access-gdhn9\") pod \"nmstate-operator-694c9596b7-v4789\" (UID: \"6001a4d8-98fd-45aa-b473-a50a938ae9c7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.607135 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdhn9\" (UniqueName: \"kubernetes.io/projected/6001a4d8-98fd-45aa-b473-a50a938ae9c7-kube-api-access-gdhn9\") pod \"nmstate-operator-694c9596b7-v4789\" (UID: \"6001a4d8-98fd-45aa-b473-a50a938ae9c7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789"
Feb 19 21:13:21 crc kubenswrapper[4886]: I0219 21:13:21.716238 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789"
Feb 19 21:13:22 crc kubenswrapper[4886]: I0219 21:13:22.240468 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-v4789"]
Feb 19 21:13:23 crc kubenswrapper[4886]: I0219 21:13:23.024541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789" event={"ID":"6001a4d8-98fd-45aa-b473-a50a938ae9c7","Type":"ContainerStarted","Data":"64dbe689e8fac84c38209a72f41ee007f39f0bee01b4deab59b08bdec48f90c0"}
Feb 19 21:13:24 crc kubenswrapper[4886]: I0219 21:13:24.579960 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:24 crc kubenswrapper[4886]: I0219 21:13:24.580132 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wns9m"
Feb 19 21:13:25 crc kubenswrapper[4886]: I0219 21:13:25.043777 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789" event={"ID":"6001a4d8-98fd-45aa-b473-a50a938ae9c7","Type":"ContainerStarted","Data":"f501c3b9b3d6e5bde1ee57be55b31758c8f070cc985f69f9515438dac632e1c2"}
Feb 19 21:13:25 crc kubenswrapper[4886]: I0219 21:13:25.072752 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-v4789" podStartSLOduration=1.607908663 podStartE2EDuration="4.072725497s" podCreationTimestamp="2026-02-19 21:13:21 +0000 UTC" firstStartedPulling="2026-02-19 21:13:22.239573682 +0000 UTC m=+832.867416732" lastFinishedPulling="2026-02-19 21:13:24.704390516 +0000 UTC m=+835.332233566"
observedRunningTime="2026-02-19 21:13:25.067890249 +0000 UTC m=+835.695733339" watchObservedRunningTime="2026-02-19 21:13:25.072725497 +0000 UTC m=+835.700568587" Feb 19 21:13:25 crc kubenswrapper[4886]: I0219 21:13:25.623065 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wns9m" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="registry-server" probeResult="failure" output=< Feb 19 21:13:25 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:13:25 crc kubenswrapper[4886]: > Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.106429 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.107931 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.110061 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vr9tn" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.113861 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.115066 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.116485 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.122370 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.146932 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.159359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s49wh\" (UniqueName: \"kubernetes.io/projected/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-kube-api-access-s49wh\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.159436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckpf\" (UniqueName: \"kubernetes.io/projected/dd893343-b95f-4900-beb8-30e55011cf84-kube-api-access-zckpf\") pod \"nmstate-metrics-58c85c668d-kd2q6\" (UID: \"dd893343-b95f-4900-beb8-30e55011cf84\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.159463 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.167170 4886 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7s5zm"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.168050 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.259720 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.260604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl6f2\" (UniqueName: \"kubernetes.io/projected/6ac61251-1025-4d57-a07c-3ce70f96f3e3-kube-api-access-tl6f2\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.260663 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zckpf\" (UniqueName: \"kubernetes.io/projected/dd893343-b95f-4900-beb8-30e55011cf84-kube-api-access-zckpf\") pod \"nmstate-metrics-58c85c668d-kd2q6\" (UID: \"dd893343-b95f-4900-beb8-30e55011cf84\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.260716 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.260770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-nmstate-lock\") pod \"nmstate-handler-7s5zm\" (UID: 
\"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.260787 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-ovs-socket\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.260832 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-dbus-socket\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: E0219 21:13:26.260946 4886 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 19 21:13:26 crc kubenswrapper[4886]: E0219 21:13:26.261040 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-tls-key-pair podName:bd3b67b3-9bd9-4b93-bc64-57be5e285a4f nodeName:}" failed. No retries permitted until 2026-02-19 21:13:26.761015387 +0000 UTC m=+837.388858437 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-tls-key-pair") pod "nmstate-webhook-866bcb46dc-6zf9p" (UID: "bd3b67b3-9bd9-4b93-bc64-57be5e285a4f") : secret "openshift-nmstate-webhook" not found Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.261100 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s49wh\" (UniqueName: \"kubernetes.io/projected/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-kube-api-access-s49wh\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.261496 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.266066 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.266138 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-f4z4d" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.268449 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.275169 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.291804 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s49wh\" (UniqueName: \"kubernetes.io/projected/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-kube-api-access-s49wh\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.292907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zckpf\" (UniqueName: \"kubernetes.io/projected/dd893343-b95f-4900-beb8-30e55011cf84-kube-api-access-zckpf\") pod \"nmstate-metrics-58c85c668d-kd2q6\" (UID: \"dd893343-b95f-4900-beb8-30e55011cf84\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.362369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl6f2\" (UniqueName: \"kubernetes.io/projected/6ac61251-1025-4d57-a07c-3ce70f96f3e3-kube-api-access-tl6f2\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.362864 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/288b3d58-e699-4f16-bd72-e198b8e53d7c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.362908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/288b3d58-e699-4f16-bd72-e198b8e53d7c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.362934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-nmstate-lock\") pod 
\"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.362950 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-ovs-socket\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.363036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-nmstate-lock\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.363036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-ovs-socket\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.363121 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-dbus-socket\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.363155 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdg5\" (UniqueName: \"kubernetes.io/projected/288b3d58-e699-4f16-bd72-e198b8e53d7c-kube-api-access-9kdg5\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.363449 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6ac61251-1025-4d57-a07c-3ce70f96f3e3-dbus-socket\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.382614 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl6f2\" (UniqueName: \"kubernetes.io/projected/6ac61251-1025-4d57-a07c-3ce70f96f3e3-kube-api-access-tl6f2\") pod \"nmstate-handler-7s5zm\" (UID: \"6ac61251-1025-4d57-a07c-3ce70f96f3e3\") " pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.429095 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.464441 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdg5\" (UniqueName: \"kubernetes.io/projected/288b3d58-e699-4f16-bd72-e198b8e53d7c-kube-api-access-9kdg5\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.464602 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/288b3d58-e699-4f16-bd72-e198b8e53d7c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.464676 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/288b3d58-e699-4f16-bd72-e198b8e53d7c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: E0219 21:13:26.464780 4886 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 19 21:13:26 crc kubenswrapper[4886]: E0219 21:13:26.464856 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/288b3d58-e699-4f16-bd72-e198b8e53d7c-plugin-serving-cert podName:288b3d58-e699-4f16-bd72-e198b8e53d7c nodeName:}" failed. No retries permitted until 2026-02-19 21:13:26.964837549 +0000 UTC m=+837.592680599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/288b3d58-e699-4f16-bd72-e198b8e53d7c-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-j7cml" (UID: "288b3d58-e699-4f16-bd72-e198b8e53d7c") : secret "plugin-serving-cert" not found Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.466046 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/288b3d58-e699-4f16-bd72-e198b8e53d7c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.491847 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdg5\" (UniqueName: \"kubernetes.io/projected/288b3d58-e699-4f16-bd72-e198b8e53d7c-kube-api-access-9kdg5\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.500313 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.504279 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79dd89cd97-kwpj9"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.505205 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.570557 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-serving-cert\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.570852 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-oauth-config\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.571080 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-service-ca\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.571427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-console-config\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.571772 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79dd89cd97-kwpj9"] Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.571808 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-oauth-serving-cert\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.572089 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-trusted-ca-bundle\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.572176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9g9\" (UniqueName: \"kubernetes.io/projected/622977a2-44e6-4860-9f42-45619a022ed3-kube-api-access-mw9g9\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.673484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-serving-cert\") pod \"console-79dd89cd97-kwpj9\" (UID: 
\"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.673923 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-oauth-config\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.673954 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-service-ca\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.673985 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-console-config\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.674039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-oauth-serving-cert\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.674102 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-trusted-ca-bundle\") pod \"console-79dd89cd97-kwpj9\" (UID: 
\"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.674124 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9g9\" (UniqueName: \"kubernetes.io/projected/622977a2-44e6-4860-9f42-45619a022ed3-kube-api-access-mw9g9\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.674817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-service-ca\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.675169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-oauth-serving-cert\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.676635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-console-config\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.678318 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-trusted-ca-bundle\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " 
pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.682385 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-oauth-config\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.682782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-serving-cert\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.691347 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9g9\" (UniqueName: \"kubernetes.io/projected/622977a2-44e6-4860-9f42-45619a022ed3-kube-api-access-mw9g9\") pod \"console-79dd89cd97-kwpj9\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.775892 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.779404 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd3b67b3-9bd9-4b93-bc64-57be5e285a4f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6zf9p\" (UID: \"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.875320 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.933971 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6"] Feb 19 21:13:26 crc kubenswrapper[4886]: W0219 21:13:26.950808 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd893343_b95f_4900_beb8_30e55011cf84.slice/crio-4016a9a87db4a3d98d86ed59817496e7a54c4458996830714ac2102f7476a3f2 WatchSource:0}: Error finding container 4016a9a87db4a3d98d86ed59817496e7a54c4458996830714ac2102f7476a3f2: Status 404 returned error can't find the container with id 4016a9a87db4a3d98d86ed59817496e7a54c4458996830714ac2102f7476a3f2 Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.984255 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/288b3d58-e699-4f16-bd72-e198b8e53d7c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:26 crc kubenswrapper[4886]: I0219 21:13:26.987576 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/288b3d58-e699-4f16-bd72-e198b8e53d7c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j7cml\" (UID: \"288b3d58-e699-4f16-bd72-e198b8e53d7c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.037792 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.058446 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7s5zm" event={"ID":"6ac61251-1025-4d57-a07c-3ce70f96f3e3","Type":"ContainerStarted","Data":"693d21d69f077ae1b3f7fe96cfacf7a66438d94a30ea84312a2ba4fa8b8098cf"} Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.059993 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" event={"ID":"dd893343-b95f-4900-beb8-30e55011cf84","Type":"ContainerStarted","Data":"4016a9a87db4a3d98d86ed59817496e7a54c4458996830714ac2102f7476a3f2"} Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.103997 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79dd89cd97-kwpj9"] Feb 19 21:13:27 crc kubenswrapper[4886]: W0219 21:13:27.111683 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622977a2_44e6_4860_9f42_45619a022ed3.slice/crio-cdc815f6d4331f525e96df28fbcabfc27ea4ea2cb5e8036aed33132bf90c85c6 WatchSource:0}: Error finding container cdc815f6d4331f525e96df28fbcabfc27ea4ea2cb5e8036aed33132bf90c85c6: Status 404 returned error can't find the container with id cdc815f6d4331f525e96df28fbcabfc27ea4ea2cb5e8036aed33132bf90c85c6 Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.175623 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.332089 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p"] Feb 19 21:13:27 crc kubenswrapper[4886]: I0219 21:13:27.682122 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml"] Feb 19 21:13:28 crc kubenswrapper[4886]: I0219 21:13:28.069013 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" event={"ID":"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f","Type":"ContainerStarted","Data":"6f06c1bd135b189323d8eaef857fb8fe84f71b99a223a9bb76c953d7be7f524f"} Feb 19 21:13:28 crc kubenswrapper[4886]: I0219 21:13:28.070972 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dd89cd97-kwpj9" event={"ID":"622977a2-44e6-4860-9f42-45619a022ed3","Type":"ContainerStarted","Data":"bfa0f3e0d485a92a46084ed6c959f73fe9a82d5f278ad39a12b4d2f49ca35a2a"} Feb 19 21:13:28 crc kubenswrapper[4886]: I0219 21:13:28.071060 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dd89cd97-kwpj9" event={"ID":"622977a2-44e6-4860-9f42-45619a022ed3","Type":"ContainerStarted","Data":"cdc815f6d4331f525e96df28fbcabfc27ea4ea2cb5e8036aed33132bf90c85c6"} Feb 19 21:13:28 crc kubenswrapper[4886]: I0219 21:13:28.072850 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" event={"ID":"288b3d58-e699-4f16-bd72-e198b8e53d7c","Type":"ContainerStarted","Data":"02258b2700970365613ab5ddfc54166a0d3ccd4da4118b5633df1e6416929d98"} Feb 19 21:13:30 crc kubenswrapper[4886]: I0219 21:13:30.629390 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79dd89cd97-kwpj9" podStartSLOduration=4.629373703 podStartE2EDuration="4.629373703s" 
podCreationTimestamp="2026-02-19 21:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:13:28.093775942 +0000 UTC m=+838.721618992" watchObservedRunningTime="2026-02-19 21:13:30.629373703 +0000 UTC m=+841.257216753" Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.116843 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" event={"ID":"bd3b67b3-9bd9-4b93-bc64-57be5e285a4f","Type":"ContainerStarted","Data":"6d82e4c74bba351ff38ab7659353a9dd1c4768850f4d52236bb23446acbbdee6"} Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.117147 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.118638 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7s5zm" event={"ID":"6ac61251-1025-4d57-a07c-3ce70f96f3e3","Type":"ContainerStarted","Data":"3ba3d9527b265199535a0661690161708c36392d65ca92a342e199a9899aa8a6"} Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.119126 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.123697 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" event={"ID":"dd893343-b95f-4900-beb8-30e55011cf84","Type":"ContainerStarted","Data":"8e94cabee1bf00e6087643d086ce606e574f884ba876ad1c65098202225fd0d6"} Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.144929 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" podStartSLOduration=2.459291896 podStartE2EDuration="5.144895464s" podCreationTimestamp="2026-02-19 21:13:26 +0000 UTC" 
firstStartedPulling="2026-02-19 21:13:27.343420186 +0000 UTC m=+837.971263236" lastFinishedPulling="2026-02-19 21:13:30.029023754 +0000 UTC m=+840.656866804" observedRunningTime="2026-02-19 21:13:31.131250365 +0000 UTC m=+841.759093425" watchObservedRunningTime="2026-02-19 21:13:31.144895464 +0000 UTC m=+841.772738514" Feb 19 21:13:31 crc kubenswrapper[4886]: I0219 21:13:31.190801 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7s5zm" podStartSLOduration=1.7418495040000002 podStartE2EDuration="5.190779834s" podCreationTimestamp="2026-02-19 21:13:26 +0000 UTC" firstStartedPulling="2026-02-19 21:13:26.577193281 +0000 UTC m=+837.205036321" lastFinishedPulling="2026-02-19 21:13:30.026123591 +0000 UTC m=+840.653966651" observedRunningTime="2026-02-19 21:13:31.186280822 +0000 UTC m=+841.814123872" watchObservedRunningTime="2026-02-19 21:13:31.190779834 +0000 UTC m=+841.818622884" Feb 19 21:13:33 crc kubenswrapper[4886]: I0219 21:13:33.157074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" event={"ID":"288b3d58-e699-4f16-bd72-e198b8e53d7c","Type":"ContainerStarted","Data":"1296fe3f74145bc8f86fb586d6ec1748fbcd3901c5c0f4ee6c688280b7731e5f"} Feb 19 21:13:33 crc kubenswrapper[4886]: I0219 21:13:33.171727 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j7cml" podStartSLOduration=2.799177499 podStartE2EDuration="7.171712221s" podCreationTimestamp="2026-02-19 21:13:26 +0000 UTC" firstStartedPulling="2026-02-19 21:13:27.685882423 +0000 UTC m=+838.313725493" lastFinishedPulling="2026-02-19 21:13:32.058417125 +0000 UTC m=+842.686260215" observedRunningTime="2026-02-19 21:13:33.171037454 +0000 UTC m=+843.798880514" watchObservedRunningTime="2026-02-19 21:13:33.171712221 +0000 UTC m=+843.799555271" Feb 19 21:13:34 crc kubenswrapper[4886]: I0219 21:13:34.169806 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" event={"ID":"dd893343-b95f-4900-beb8-30e55011cf84","Type":"ContainerStarted","Data":"0be925fc7fc610744e4d35cfb7c52e74a0b93c1ad13a299a597ed469d7a47e76"} Feb 19 21:13:34 crc kubenswrapper[4886]: I0219 21:13:34.208419 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kd2q6" podStartSLOduration=1.803330855 podStartE2EDuration="8.208392413s" podCreationTimestamp="2026-02-19 21:13:26 +0000 UTC" firstStartedPulling="2026-02-19 21:13:26.954131742 +0000 UTC m=+837.581974802" lastFinishedPulling="2026-02-19 21:13:33.35919329 +0000 UTC m=+843.987036360" observedRunningTime="2026-02-19 21:13:34.197514732 +0000 UTC m=+844.825357812" watchObservedRunningTime="2026-02-19 21:13:34.208392413 +0000 UTC m=+844.836235503" Feb 19 21:13:34 crc kubenswrapper[4886]: I0219 21:13:34.662673 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wns9m" Feb 19 21:13:34 crc kubenswrapper[4886]: I0219 21:13:34.743708 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wns9m" Feb 19 21:13:34 crc kubenswrapper[4886]: I0219 21:13:34.911810 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wns9m"] Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.186802 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wns9m" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="registry-server" containerID="cri-o://1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27" gracePeriod=2 Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.535548 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7s5zm" Feb 19 
21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.668100 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wns9m" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.753104 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-catalog-content\") pod \"affc0609-88b0-492e-9ff6-33cd6bb58188\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.753341 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22vhh\" (UniqueName: \"kubernetes.io/projected/affc0609-88b0-492e-9ff6-33cd6bb58188-kube-api-access-22vhh\") pod \"affc0609-88b0-492e-9ff6-33cd6bb58188\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.753620 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-utilities\") pod \"affc0609-88b0-492e-9ff6-33cd6bb58188\" (UID: \"affc0609-88b0-492e-9ff6-33cd6bb58188\") " Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.755340 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-utilities" (OuterVolumeSpecName: "utilities") pod "affc0609-88b0-492e-9ff6-33cd6bb58188" (UID: "affc0609-88b0-492e-9ff6-33cd6bb58188"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.762489 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affc0609-88b0-492e-9ff6-33cd6bb58188-kube-api-access-22vhh" (OuterVolumeSpecName: "kube-api-access-22vhh") pod "affc0609-88b0-492e-9ff6-33cd6bb58188" (UID: "affc0609-88b0-492e-9ff6-33cd6bb58188"). InnerVolumeSpecName "kube-api-access-22vhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.857145 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22vhh\" (UniqueName: \"kubernetes.io/projected/affc0609-88b0-492e-9ff6-33cd6bb58188-kube-api-access-22vhh\") on node \"crc\" DevicePath \"\"" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.857214 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.875967 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.876470 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.883917 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.930241 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "affc0609-88b0-492e-9ff6-33cd6bb58188" (UID: "affc0609-88b0-492e-9ff6-33cd6bb58188"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:13:36 crc kubenswrapper[4886]: I0219 21:13:36.958953 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc0609-88b0-492e-9ff6-33cd6bb58188-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.204689 4886 generic.go:334] "Generic (PLEG): container finished" podID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerID="1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27" exitCode=0 Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.204791 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wns9m" event={"ID":"affc0609-88b0-492e-9ff6-33cd6bb58188","Type":"ContainerDied","Data":"1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27"} Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.207350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wns9m" event={"ID":"affc0609-88b0-492e-9ff6-33cd6bb58188","Type":"ContainerDied","Data":"3546a1018ca2700be4961d3341e8b5bd0fe60584d93517e4d79390e8d3c3c498"} Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.207619 4886 scope.go:117] "RemoveContainer" containerID="1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.204841 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wns9m" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.243944 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.268427 4886 scope.go:117] "RemoveContainer" containerID="fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.305630 4886 scope.go:117] "RemoveContainer" containerID="fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.333815 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wns9m"] Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.342579 4886 scope.go:117] "RemoveContainer" containerID="1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.344385 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wns9m"] Feb 19 21:13:37 crc kubenswrapper[4886]: E0219 21:13:37.344820 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27\": container with ID starting with 1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27 not found: ID does not exist" containerID="1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.344880 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27"} err="failed to get container status \"1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27\": rpc error: code = NotFound desc = could not find container 
\"1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27\": container with ID starting with 1d1c41b29bce73b0457e1afd0f0784ac5f95e0b234898dff905b68feccc0fa27 not found: ID does not exist" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.344912 4886 scope.go:117] "RemoveContainer" containerID="fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8" Feb 19 21:13:37 crc kubenswrapper[4886]: E0219 21:13:37.345231 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8\": container with ID starting with fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8 not found: ID does not exist" containerID="fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.345255 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8"} err="failed to get container status \"fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8\": rpc error: code = NotFound desc = could not find container \"fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8\": container with ID starting with fcdc515ebf456014cb6a1ea85195a53ff24727678fad24d82d74e85ce6df83d8 not found: ID does not exist" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.345290 4886 scope.go:117] "RemoveContainer" containerID="fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19" Feb 19 21:13:37 crc kubenswrapper[4886]: E0219 21:13:37.345527 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19\": container with ID starting with fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19 not found: ID does not exist" 
containerID="fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.345580 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19"} err="failed to get container status \"fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19\": rpc error: code = NotFound desc = could not find container \"fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19\": container with ID starting with fbb76513a1ea73b7aff996cd953b99b61fbde64fddb87ac7b3b3fe0565cacd19 not found: ID does not exist" Feb 19 21:13:37 crc kubenswrapper[4886]: I0219 21:13:37.357190 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75cd8bb48b-w5lkt"] Feb 19 21:13:38 crc kubenswrapper[4886]: I0219 21:13:38.615831 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" path="/var/lib/kubelet/pods/affc0609-88b0-492e-9ff6-33cd6bb58188/volumes" Feb 19 21:13:47 crc kubenswrapper[4886]: I0219 21:13:47.047356 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.413474 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75cd8bb48b-w5lkt" podUID="65d827c6-eb13-4786-ab1a-df0067a3772e" containerName="console" containerID="cri-o://9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca" gracePeriod=15 Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.846420 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75cd8bb48b-w5lkt_65d827c6-eb13-4786-ab1a-df0067a3772e/console/0.log" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.847100 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.991901 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-serving-cert\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.992765 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-oauth-serving-cert\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.992807 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-oauth-config\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.992923 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvjc\" (UniqueName: \"kubernetes.io/projected/65d827c6-eb13-4786-ab1a-df0067a3772e-kube-api-access-jnvjc\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.992967 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-trusted-ca-bundle\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.993008 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-console-config\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.993065 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-service-ca\") pod \"65d827c6-eb13-4786-ab1a-df0067a3772e\" (UID: \"65d827c6-eb13-4786-ab1a-df0067a3772e\") " Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.993725 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.994008 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.994431 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-service-ca" (OuterVolumeSpecName: "service-ca") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.994495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-console-config" (OuterVolumeSpecName: "console-config") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.995400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.997485 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.998558 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:14:02 crc kubenswrapper[4886]: I0219 21:14:02.999362 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d827c6-eb13-4786-ab1a-df0067a3772e-kube-api-access-jnvjc" (OuterVolumeSpecName: "kube-api-access-jnvjc") pod "65d827c6-eb13-4786-ab1a-df0067a3772e" (UID: "65d827c6-eb13-4786-ab1a-df0067a3772e"). InnerVolumeSpecName "kube-api-access-jnvjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.095721 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.095763 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.095774 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/65d827c6-eb13-4786-ab1a-df0067a3772e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.095784 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.095794 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/65d827c6-eb13-4786-ab1a-df0067a3772e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.095803 4886 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-jnvjc\" (UniqueName: \"kubernetes.io/projected/65d827c6-eb13-4786-ab1a-df0067a3772e-kube-api-access-jnvjc\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.451022 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75cd8bb48b-w5lkt_65d827c6-eb13-4786-ab1a-df0067a3772e/console/0.log" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.451355 4886 generic.go:334] "Generic (PLEG): container finished" podID="65d827c6-eb13-4786-ab1a-df0067a3772e" containerID="9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca" exitCode=2 Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.451383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75cd8bb48b-w5lkt" event={"ID":"65d827c6-eb13-4786-ab1a-df0067a3772e","Type":"ContainerDied","Data":"9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca"} Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.451414 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75cd8bb48b-w5lkt" event={"ID":"65d827c6-eb13-4786-ab1a-df0067a3772e","Type":"ContainerDied","Data":"2409cee877d8620d93013d5eafc035c75cc9e81853726e7d807e5e96415273d3"} Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.451430 4886 scope.go:117] "RemoveContainer" containerID="9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.451454 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75cd8bb48b-w5lkt" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.474768 4886 scope.go:117] "RemoveContainer" containerID="9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca" Feb 19 21:14:03 crc kubenswrapper[4886]: E0219 21:14:03.475505 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca\": container with ID starting with 9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca not found: ID does not exist" containerID="9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.475554 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca"} err="failed to get container status \"9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca\": rpc error: code = NotFound desc = could not find container \"9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca\": container with ID starting with 9e5c7c9d8cb2ecddca04123c922dc68b54834ef4d05b0594ade71e295052b0ca not found: ID does not exist" Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.499099 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75cd8bb48b-w5lkt"] Feb 19 21:14:03 crc kubenswrapper[4886]: I0219 21:14:03.505777 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75cd8bb48b-w5lkt"] Feb 19 21:14:04 crc kubenswrapper[4886]: I0219 21:14:04.609648 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d827c6-eb13-4786-ab1a-df0067a3772e" path="/var/lib/kubelet/pods/65d827c6-eb13-4786-ab1a-df0067a3772e/volumes" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.741723 4886 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv"] Feb 19 21:14:06 crc kubenswrapper[4886]: E0219 21:14:06.742383 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="extract-content" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.742398 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="extract-content" Feb 19 21:14:06 crc kubenswrapper[4886]: E0219 21:14:06.742423 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="extract-utilities" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.742430 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="extract-utilities" Feb 19 21:14:06 crc kubenswrapper[4886]: E0219 21:14:06.742442 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="registry-server" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.742449 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="registry-server" Feb 19 21:14:06 crc kubenswrapper[4886]: E0219 21:14:06.742463 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d827c6-eb13-4786-ab1a-df0067a3772e" containerName="console" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.742469 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d827c6-eb13-4786-ab1a-df0067a3772e" containerName="console" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.742623 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="affc0609-88b0-492e-9ff6-33cd6bb58188" containerName="registry-server" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.742640 4886 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="65d827c6-eb13-4786-ab1a-df0067a3772e" containerName="console" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.743943 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.746961 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv"] Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.753183 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.856339 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gft4x\" (UniqueName: \"kubernetes.io/projected/eda617d6-792e-4b30-b382-81d44d4bd445-kube-api-access-gft4x\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.856469 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.856495 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-bundle\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.958446 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.958522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.958640 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gft4x\" (UniqueName: \"kubernetes.io/projected/eda617d6-792e-4b30-b382-81d44d4bd445-kube-api-access-gft4x\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.959221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.959869 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:06 crc kubenswrapper[4886]: I0219 21:14:06.983316 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gft4x\" (UniqueName: \"kubernetes.io/projected/eda617d6-792e-4b30-b382-81d44d4bd445-kube-api-access-gft4x\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:07 crc kubenswrapper[4886]: I0219 21:14:07.080097 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:07 crc kubenswrapper[4886]: I0219 21:14:07.650226 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv"] Feb 19 21:14:08 crc kubenswrapper[4886]: I0219 21:14:08.490255 4886 generic.go:334] "Generic (PLEG): container finished" podID="eda617d6-792e-4b30-b382-81d44d4bd445" containerID="5d1a4a7a73e081d4f0a5e23c0eeddcb8a0b20140229f5f9884de373bbbedd55f" exitCode=0 Feb 19 21:14:08 crc kubenswrapper[4886]: I0219 21:14:08.490432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" event={"ID":"eda617d6-792e-4b30-b382-81d44d4bd445","Type":"ContainerDied","Data":"5d1a4a7a73e081d4f0a5e23c0eeddcb8a0b20140229f5f9884de373bbbedd55f"} Feb 19 21:14:08 crc kubenswrapper[4886]: I0219 21:14:08.490770 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" event={"ID":"eda617d6-792e-4b30-b382-81d44d4bd445","Type":"ContainerStarted","Data":"59d5eb88ff8aded79e90f72c6f96ba906de156908518b74c5da120811bf45b6a"} Feb 19 21:14:08 crc kubenswrapper[4886]: I0219 21:14:08.492706 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:14:10 crc kubenswrapper[4886]: I0219 21:14:10.518691 4886 generic.go:334] "Generic (PLEG): container finished" podID="eda617d6-792e-4b30-b382-81d44d4bd445" containerID="5d9c3c812b18edba2695f922ec020c8e52a2f95616b6d59efbf3c42efa5b6267" exitCode=0 Feb 19 21:14:10 crc kubenswrapper[4886]: I0219 21:14:10.518777 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" 
event={"ID":"eda617d6-792e-4b30-b382-81d44d4bd445","Type":"ContainerDied","Data":"5d9c3c812b18edba2695f922ec020c8e52a2f95616b6d59efbf3c42efa5b6267"} Feb 19 21:14:11 crc kubenswrapper[4886]: I0219 21:14:11.534497 4886 generic.go:334] "Generic (PLEG): container finished" podID="eda617d6-792e-4b30-b382-81d44d4bd445" containerID="27d9df0b2ee8195b9cb54044d6fed932a29515819f19ed7a93e7e2ba0446b156" exitCode=0 Feb 19 21:14:11 crc kubenswrapper[4886]: I0219 21:14:11.534562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" event={"ID":"eda617d6-792e-4b30-b382-81d44d4bd445","Type":"ContainerDied","Data":"27d9df0b2ee8195b9cb54044d6fed932a29515819f19ed7a93e7e2ba0446b156"} Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.888573 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.906676 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-util\") pod \"eda617d6-792e-4b30-b382-81d44d4bd445\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.907045 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-bundle\") pod \"eda617d6-792e-4b30-b382-81d44d4bd445\" (UID: \"eda617d6-792e-4b30-b382-81d44d4bd445\") " Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.907343 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gft4x\" (UniqueName: \"kubernetes.io/projected/eda617d6-792e-4b30-b382-81d44d4bd445-kube-api-access-gft4x\") pod \"eda617d6-792e-4b30-b382-81d44d4bd445\" (UID: 
\"eda617d6-792e-4b30-b382-81d44d4bd445\") " Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.910043 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-bundle" (OuterVolumeSpecName: "bundle") pod "eda617d6-792e-4b30-b382-81d44d4bd445" (UID: "eda617d6-792e-4b30-b382-81d44d4bd445"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.917107 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda617d6-792e-4b30-b382-81d44d4bd445-kube-api-access-gft4x" (OuterVolumeSpecName: "kube-api-access-gft4x") pod "eda617d6-792e-4b30-b382-81d44d4bd445" (UID: "eda617d6-792e-4b30-b382-81d44d4bd445"). InnerVolumeSpecName "kube-api-access-gft4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:14:12 crc kubenswrapper[4886]: I0219 21:14:12.924296 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-util" (OuterVolumeSpecName: "util") pod "eda617d6-792e-4b30-b382-81d44d4bd445" (UID: "eda617d6-792e-4b30-b382-81d44d4bd445"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:13 crc kubenswrapper[4886]: I0219 21:14:13.011209 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:13 crc kubenswrapper[4886]: I0219 21:14:13.011285 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gft4x\" (UniqueName: \"kubernetes.io/projected/eda617d6-792e-4b30-b382-81d44d4bd445-kube-api-access-gft4x\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:13 crc kubenswrapper[4886]: I0219 21:14:13.011308 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda617d6-792e-4b30-b382-81d44d4bd445-util\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:13 crc kubenswrapper[4886]: I0219 21:14:13.555230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" event={"ID":"eda617d6-792e-4b30-b382-81d44d4bd445","Type":"ContainerDied","Data":"59d5eb88ff8aded79e90f72c6f96ba906de156908518b74c5da120811bf45b6a"} Feb 19 21:14:13 crc kubenswrapper[4886]: I0219 21:14:13.555300 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d5eb88ff8aded79e90f72c6f96ba906de156908518b74c5da120811bf45b6a" Feb 19 21:14:13 crc kubenswrapper[4886]: I0219 21:14:13.555394 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21342gvv" Feb 19 21:14:18 crc kubenswrapper[4886]: I0219 21:14:18.324930 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:14:18 crc kubenswrapper[4886]: I0219 21:14:18.325639 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.745059 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m"] Feb 19 21:14:24 crc kubenswrapper[4886]: E0219 21:14:24.745952 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" containerName="pull" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.745967 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" containerName="pull" Feb 19 21:14:24 crc kubenswrapper[4886]: E0219 21:14:24.745990 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" containerName="extract" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.745997 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" containerName="extract" Feb 19 21:14:24 crc kubenswrapper[4886]: E0219 21:14:24.746013 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" 
containerName="util" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.746021 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" containerName="util" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.746188 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda617d6-792e-4b30-b382-81d44d4bd445" containerName="extract" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.746820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.752638 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.752899 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.753113 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.753807 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-476nj" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.753964 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.774934 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m"] Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.803074 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8r6j\" (UniqueName: 
\"kubernetes.io/projected/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-kube-api-access-z8r6j\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.803181 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-apiservice-cert\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.803216 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-webhook-cert\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.904689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-webhook-cert\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.904794 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8r6j\" (UniqueName: \"kubernetes.io/projected/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-kube-api-access-z8r6j\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: 
\"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.904871 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-apiservice-cert\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.912360 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-webhook-cert\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.924670 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8r6j\" (UniqueName: \"kubernetes.io/projected/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-kube-api-access-z8r6j\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:24 crc kubenswrapper[4886]: I0219 21:14:24.928946 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e51c0ebd-f319-41ea-9f7e-17bca0f30b6c-apiservice-cert\") pod \"metallb-operator-controller-manager-75f8dcd7db-4522m\" (UID: \"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c\") " pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.068943 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.194629 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64"] Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.196008 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.200594 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.200633 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.200945 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vqctv" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.213888 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64"] Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.311642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmgh\" (UniqueName: \"kubernetes.io/projected/5fa1c852-1a91-4a17-9edf-42db1180c6a9-kube-api-access-czmgh\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.311821 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa1c852-1a91-4a17-9edf-42db1180c6a9-webhook-cert\") pod 
\"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.311891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa1c852-1a91-4a17-9edf-42db1180c6a9-apiservice-cert\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.414184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa1c852-1a91-4a17-9edf-42db1180c6a9-webhook-cert\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.414301 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa1c852-1a91-4a17-9edf-42db1180c6a9-apiservice-cert\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.414485 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czmgh\" (UniqueName: \"kubernetes.io/projected/5fa1c852-1a91-4a17-9edf-42db1180c6a9-kube-api-access-czmgh\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.419031 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa1c852-1a91-4a17-9edf-42db1180c6a9-webhook-cert\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.434443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa1c852-1a91-4a17-9edf-42db1180c6a9-apiservice-cert\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.438092 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmgh\" (UniqueName: \"kubernetes.io/projected/5fa1c852-1a91-4a17-9edf-42db1180c6a9-kube-api-access-czmgh\") pod \"metallb-operator-webhook-server-5d96cd6488-wwn64\" (UID: \"5fa1c852-1a91-4a17-9edf-42db1180c6a9\") " pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.536175 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.548177 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m"] Feb 19 21:14:25 crc kubenswrapper[4886]: W0219 21:14:25.565323 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51c0ebd_f319_41ea_9f7e_17bca0f30b6c.slice/crio-b7c0e6d5f9c5ea34b7a8aead5190f881d6ebe51249703cd71ff80973ef6d5413 WatchSource:0}: Error finding container b7c0e6d5f9c5ea34b7a8aead5190f881d6ebe51249703cd71ff80973ef6d5413: Status 404 returned error can't find the container with id b7c0e6d5f9c5ea34b7a8aead5190f881d6ebe51249703cd71ff80973ef6d5413 Feb 19 21:14:25 crc kubenswrapper[4886]: I0219 21:14:25.667322 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" event={"ID":"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c","Type":"ContainerStarted","Data":"b7c0e6d5f9c5ea34b7a8aead5190f881d6ebe51249703cd71ff80973ef6d5413"} Feb 19 21:14:26 crc kubenswrapper[4886]: I0219 21:14:26.013950 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64"] Feb 19 21:14:26 crc kubenswrapper[4886]: W0219 21:14:26.018977 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fa1c852_1a91_4a17_9edf_42db1180c6a9.slice/crio-2f34cf61de914ad8f0f7e46eb5379c5c60df4d37a4e0de303aa4aeefdf5d0ebe WatchSource:0}: Error finding container 2f34cf61de914ad8f0f7e46eb5379c5c60df4d37a4e0de303aa4aeefdf5d0ebe: Status 404 returned error can't find the container with id 2f34cf61de914ad8f0f7e46eb5379c5c60df4d37a4e0de303aa4aeefdf5d0ebe Feb 19 21:14:26 crc kubenswrapper[4886]: I0219 21:14:26.677053 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" event={"ID":"5fa1c852-1a91-4a17-9edf-42db1180c6a9","Type":"ContainerStarted","Data":"2f34cf61de914ad8f0f7e46eb5379c5c60df4d37a4e0de303aa4aeefdf5d0ebe"} Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.228250 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bdc2x"] Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.238665 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.253344 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdc2x"] Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.305793 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-catalog-content\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.305921 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbqr\" (UniqueName: \"kubernetes.io/projected/eb3244f8-5100-4bb4-8e89-814b4b311240-kube-api-access-hwbqr\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.305993 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-utilities\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " 
pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.408013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-catalog-content\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.408443 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbqr\" (UniqueName: \"kubernetes.io/projected/eb3244f8-5100-4bb4-8e89-814b4b311240-kube-api-access-hwbqr\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.408716 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-catalog-content\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.409204 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-utilities\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.409252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-utilities\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " 
pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.439254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbqr\" (UniqueName: \"kubernetes.io/projected/eb3244f8-5100-4bb4-8e89-814b4b311240-kube-api-access-hwbqr\") pod \"certified-operators-bdc2x\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.557186 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.700183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" event={"ID":"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c","Type":"ContainerStarted","Data":"cfc8ef043b2682f8db9a2219482a3c2a98044d58de0bffd6072974fc701f655d"} Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.700319 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:14:29 crc kubenswrapper[4886]: I0219 21:14:29.727597 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" podStartSLOduration=2.77140945 podStartE2EDuration="5.727576132s" podCreationTimestamp="2026-02-19 21:14:24 +0000 UTC" firstStartedPulling="2026-02-19 21:14:25.566867546 +0000 UTC m=+896.194710596" lastFinishedPulling="2026-02-19 21:14:28.523034228 +0000 UTC m=+899.150877278" observedRunningTime="2026-02-19 21:14:29.724860144 +0000 UTC m=+900.352703194" watchObservedRunningTime="2026-02-19 21:14:29.727576132 +0000 UTC m=+900.355419182" Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.466757 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-bdc2x"] Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.715156 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" event={"ID":"5fa1c852-1a91-4a17-9edf-42db1180c6a9","Type":"ContainerStarted","Data":"3ed96a5bb1f771dee8ebad466fb2a164686f5dad38b1b022d610711705cc91f0"} Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.715306 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.716670 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerID="ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18" exitCode=0 Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.716723 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdc2x" event={"ID":"eb3244f8-5100-4bb4-8e89-814b4b311240","Type":"ContainerDied","Data":"ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18"} Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.716747 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdc2x" event={"ID":"eb3244f8-5100-4bb4-8e89-814b4b311240","Type":"ContainerStarted","Data":"2f60c8c59701254403797af3568d5a17595fd7afcc9e8baf51f5adfd7eefe5eb"} Feb 19 21:14:31 crc kubenswrapper[4886]: I0219 21:14:31.736441 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podStartSLOduration=1.5526367730000001 podStartE2EDuration="6.736422602s" podCreationTimestamp="2026-02-19 21:14:25 +0000 UTC" firstStartedPulling="2026-02-19 21:14:26.022416587 +0000 UTC m=+896.650259637" lastFinishedPulling="2026-02-19 21:14:31.206202416 +0000 UTC m=+901.834045466" 
observedRunningTime="2026-02-19 21:14:31.734403842 +0000 UTC m=+902.362246902" watchObservedRunningTime="2026-02-19 21:14:31.736422602 +0000 UTC m=+902.364265662" Feb 19 21:14:33 crc kubenswrapper[4886]: I0219 21:14:33.730971 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerID="7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a" exitCode=0 Feb 19 21:14:33 crc kubenswrapper[4886]: I0219 21:14:33.731037 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdc2x" event={"ID":"eb3244f8-5100-4bb4-8e89-814b4b311240","Type":"ContainerDied","Data":"7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a"} Feb 19 21:14:34 crc kubenswrapper[4886]: I0219 21:14:34.739246 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdc2x" event={"ID":"eb3244f8-5100-4bb4-8e89-814b4b311240","Type":"ContainerStarted","Data":"bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10"} Feb 19 21:14:34 crc kubenswrapper[4886]: I0219 21:14:34.773993 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bdc2x" podStartSLOduration=3.371643716 podStartE2EDuration="5.773971705s" podCreationTimestamp="2026-02-19 21:14:29 +0000 UTC" firstStartedPulling="2026-02-19 21:14:31.718031595 +0000 UTC m=+902.345874645" lastFinishedPulling="2026-02-19 21:14:34.120359574 +0000 UTC m=+904.748202634" observedRunningTime="2026-02-19 21:14:34.768897926 +0000 UTC m=+905.396740976" watchObservedRunningTime="2026-02-19 21:14:34.773971705 +0000 UTC m=+905.401814775" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.558268 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.558777 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.607593 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.793789 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d5tw7"] Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.795518 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.814008 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5tw7"] Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.842970 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.875785 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/f6718b8d-f2ef-4c47-8239-17ef650a27a7-kube-api-access-p9bgb\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.875851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-utilities\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.875890 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-catalog-content\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.977811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/f6718b8d-f2ef-4c47-8239-17ef650a27a7-kube-api-access-p9bgb\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.977869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-utilities\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.977909 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-catalog-content\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.978571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-utilities\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:39 crc kubenswrapper[4886]: I0219 21:14:39.978611 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-catalog-content\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:40 crc kubenswrapper[4886]: I0219 21:14:40.001762 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/f6718b8d-f2ef-4c47-8239-17ef650a27a7-kube-api-access-p9bgb\") pod \"redhat-marketplace-d5tw7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:40 crc kubenswrapper[4886]: I0219 21:14:40.119590 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:40 crc kubenswrapper[4886]: I0219 21:14:40.658397 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5tw7"] Feb 19 21:14:40 crc kubenswrapper[4886]: I0219 21:14:40.788215 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5tw7" event={"ID":"f6718b8d-f2ef-4c47-8239-17ef650a27a7","Type":"ContainerStarted","Data":"26a6f8af8021fa724e4bcc75fd365f6302d8dd4b22370c87c78a74de5c846d93"} Feb 19 21:14:41 crc kubenswrapper[4886]: I0219 21:14:41.796089 4886 generic.go:334] "Generic (PLEG): container finished" podID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerID="ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d" exitCode=0 Feb 19 21:14:41 crc kubenswrapper[4886]: I0219 21:14:41.796172 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5tw7" event={"ID":"f6718b8d-f2ef-4c47-8239-17ef650a27a7","Type":"ContainerDied","Data":"ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d"} Feb 19 21:14:42 crc kubenswrapper[4886]: I0219 21:14:42.782184 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-bdc2x"] Feb 19 21:14:42 crc kubenswrapper[4886]: I0219 21:14:42.782901 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bdc2x" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="registry-server" containerID="cri-o://bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10" gracePeriod=2 Feb 19 21:14:42 crc kubenswrapper[4886]: I0219 21:14:42.806440 4886 generic.go:334] "Generic (PLEG): container finished" podID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerID="127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493" exitCode=0 Feb 19 21:14:42 crc kubenswrapper[4886]: I0219 21:14:42.806482 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5tw7" event={"ID":"f6718b8d-f2ef-4c47-8239-17ef650a27a7","Type":"ContainerDied","Data":"127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493"} Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.002230 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jjxqw"] Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.021741 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.023966 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjxqw"] Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.135811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktw8\" (UniqueName: \"kubernetes.io/projected/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-kube-api-access-hktw8\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.135876 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-catalog-content\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.135961 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-utilities\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.237729 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktw8\" (UniqueName: \"kubernetes.io/projected/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-kube-api-access-hktw8\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.237790 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-catalog-content\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.237866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-utilities\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.238370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-utilities\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.238611 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-catalog-content\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.257912 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktw8\" (UniqueName: \"kubernetes.io/projected/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-kube-api-access-hktw8\") pod \"community-operators-jjxqw\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.311096 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.346100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.439963 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwbqr\" (UniqueName: \"kubernetes.io/projected/eb3244f8-5100-4bb4-8e89-814b4b311240-kube-api-access-hwbqr\") pod \"eb3244f8-5100-4bb4-8e89-814b4b311240\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.440000 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-utilities\") pod \"eb3244f8-5100-4bb4-8e89-814b4b311240\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.440172 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-catalog-content\") pod \"eb3244f8-5100-4bb4-8e89-814b4b311240\" (UID: \"eb3244f8-5100-4bb4-8e89-814b4b311240\") " Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.444073 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-utilities" (OuterVolumeSpecName: "utilities") pod "eb3244f8-5100-4bb4-8e89-814b4b311240" (UID: "eb3244f8-5100-4bb4-8e89-814b4b311240"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.447460 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3244f8-5100-4bb4-8e89-814b4b311240-kube-api-access-hwbqr" (OuterVolumeSpecName: "kube-api-access-hwbqr") pod "eb3244f8-5100-4bb4-8e89-814b4b311240" (UID: "eb3244f8-5100-4bb4-8e89-814b4b311240"). InnerVolumeSpecName "kube-api-access-hwbqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.513353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb3244f8-5100-4bb4-8e89-814b4b311240" (UID: "eb3244f8-5100-4bb4-8e89-814b4b311240"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.541940 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwbqr\" (UniqueName: \"kubernetes.io/projected/eb3244f8-5100-4bb4-8e89-814b4b311240-kube-api-access-hwbqr\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.542255 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.542281 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3244f8-5100-4bb4-8e89-814b4b311240-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.815414 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb3244f8-5100-4bb4-8e89-814b4b311240" 
containerID="bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10" exitCode=0 Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.815488 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdc2x" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.815493 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdc2x" event={"ID":"eb3244f8-5100-4bb4-8e89-814b4b311240","Type":"ContainerDied","Data":"bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10"} Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.815593 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdc2x" event={"ID":"eb3244f8-5100-4bb4-8e89-814b4b311240","Type":"ContainerDied","Data":"2f60c8c59701254403797af3568d5a17595fd7afcc9e8baf51f5adfd7eefe5eb"} Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.815615 4886 scope.go:117] "RemoveContainer" containerID="bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.819179 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5tw7" event={"ID":"f6718b8d-f2ef-4c47-8239-17ef650a27a7","Type":"ContainerStarted","Data":"3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9"} Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.834294 4886 scope.go:117] "RemoveContainer" containerID="7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.843684 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d5tw7" podStartSLOduration=3.43509911 podStartE2EDuration="4.843670039s" podCreationTimestamp="2026-02-19 21:14:39 +0000 UTC" firstStartedPulling="2026-02-19 21:14:41.798174442 +0000 UTC m=+912.426017502" 
lastFinishedPulling="2026-02-19 21:14:43.206745381 +0000 UTC m=+913.834588431" observedRunningTime="2026-02-19 21:14:43.841581535 +0000 UTC m=+914.469424585" watchObservedRunningTime="2026-02-19 21:14:43.843670039 +0000 UTC m=+914.471513089" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.859837 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdc2x"] Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.863228 4886 scope.go:117] "RemoveContainer" containerID="ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.870170 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bdc2x"] Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.875563 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjxqw"] Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.879305 4886 scope.go:117] "RemoveContainer" containerID="bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10" Feb 19 21:14:43 crc kubenswrapper[4886]: E0219 21:14:43.879820 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10\": container with ID starting with bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10 not found: ID does not exist" containerID="bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.879858 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10"} err="failed to get container status \"bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10\": rpc error: code = NotFound desc = could not find container 
\"bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10\": container with ID starting with bc8ebe4a2df1003e954809973a89de62c16b4e8eed30578dd790865f678e1e10 not found: ID does not exist" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.879900 4886 scope.go:117] "RemoveContainer" containerID="7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a" Feb 19 21:14:43 crc kubenswrapper[4886]: E0219 21:14:43.880254 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a\": container with ID starting with 7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a not found: ID does not exist" containerID="7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.880288 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a"} err="failed to get container status \"7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a\": rpc error: code = NotFound desc = could not find container \"7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a\": container with ID starting with 7866c32d76f046ab9377bdfff584f7f568cd689d36868f6d6f3eea36abe1dc9a not found: ID does not exist" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.880300 4886 scope.go:117] "RemoveContainer" containerID="ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18" Feb 19 21:14:43 crc kubenswrapper[4886]: E0219 21:14:43.880616 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18\": container with ID starting with ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18 not found: ID does not exist" 
containerID="ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18" Feb 19 21:14:43 crc kubenswrapper[4886]: I0219 21:14:43.880652 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18"} err="failed to get container status \"ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18\": rpc error: code = NotFound desc = could not find container \"ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18\": container with ID starting with ab9d28d77e5248cb52993e7b63d0ce0d825fb8a2a5b92ebb6d087420bdd17b18 not found: ID does not exist" Feb 19 21:14:44 crc kubenswrapper[4886]: I0219 21:14:44.615021 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" path="/var/lib/kubelet/pods/eb3244f8-5100-4bb4-8e89-814b4b311240/volumes" Feb 19 21:14:44 crc kubenswrapper[4886]: I0219 21:14:44.830833 4886 generic.go:334] "Generic (PLEG): container finished" podID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerID="a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c" exitCode=0 Feb 19 21:14:44 crc kubenswrapper[4886]: I0219 21:14:44.830949 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerDied","Data":"a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c"} Feb 19 21:14:44 crc kubenswrapper[4886]: I0219 21:14:44.831006 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerStarted","Data":"d0488b5a2fb0bfc3339e6f46a3602d4961451282743195a4698746c27d029bef"} Feb 19 21:14:45 crc kubenswrapper[4886]: I0219 21:14:45.541978 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 21:14:45 crc kubenswrapper[4886]: I0219 21:14:45.842953 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerStarted","Data":"a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad"} Feb 19 21:14:46 crc kubenswrapper[4886]: I0219 21:14:46.851298 4886 generic.go:334] "Generic (PLEG): container finished" podID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerID="a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad" exitCode=0 Feb 19 21:14:46 crc kubenswrapper[4886]: I0219 21:14:46.851347 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerDied","Data":"a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad"} Feb 19 21:14:47 crc kubenswrapper[4886]: I0219 21:14:47.861867 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerStarted","Data":"4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a"} Feb 19 21:14:47 crc kubenswrapper[4886]: I0219 21:14:47.895714 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jjxqw" podStartSLOduration=3.456083068 podStartE2EDuration="5.895693312s" podCreationTimestamp="2026-02-19 21:14:42 +0000 UTC" firstStartedPulling="2026-02-19 21:14:44.833473228 +0000 UTC m=+915.461316288" lastFinishedPulling="2026-02-19 21:14:47.273083472 +0000 UTC m=+917.900926532" observedRunningTime="2026-02-19 21:14:47.888100719 +0000 UTC m=+918.515943779" watchObservedRunningTime="2026-02-19 21:14:47.895693312 +0000 UTC m=+918.523536362" Feb 19 21:14:48 crc kubenswrapper[4886]: I0219 21:14:48.325346 4886 
patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:14:48 crc kubenswrapper[4886]: I0219 21:14:48.325729 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:14:50 crc kubenswrapper[4886]: I0219 21:14:50.120722 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:50 crc kubenswrapper[4886]: I0219 21:14:50.121058 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:50 crc kubenswrapper[4886]: I0219 21:14:50.180241 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:50 crc kubenswrapper[4886]: I0219 21:14:50.943623 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:51 crc kubenswrapper[4886]: I0219 21:14:51.792091 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5tw7"] Feb 19 21:14:52 crc kubenswrapper[4886]: I0219 21:14:52.899469 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d5tw7" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="registry-server" containerID="cri-o://3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9" gracePeriod=2 Feb 19 21:14:53 crc 
kubenswrapper[4886]: I0219 21:14:53.346705 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.346973 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.348608 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.396495 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/f6718b8d-f2ef-4c47-8239-17ef650a27a7-kube-api-access-p9bgb\") pod \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.396799 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-catalog-content\") pod \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.396982 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-utilities\") pod \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\" (UID: \"f6718b8d-f2ef-4c47-8239-17ef650a27a7\") " Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.398978 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-utilities" (OuterVolumeSpecName: "utilities") pod "f6718b8d-f2ef-4c47-8239-17ef650a27a7" (UID: "f6718b8d-f2ef-4c47-8239-17ef650a27a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.402449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6718b8d-f2ef-4c47-8239-17ef650a27a7-kube-api-access-p9bgb" (OuterVolumeSpecName: "kube-api-access-p9bgb") pod "f6718b8d-f2ef-4c47-8239-17ef650a27a7" (UID: "f6718b8d-f2ef-4c47-8239-17ef650a27a7"). InnerVolumeSpecName "kube-api-access-p9bgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.403536 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.500015 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/f6718b8d-f2ef-4c47-8239-17ef650a27a7-kube-api-access-p9bgb\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.500055 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.911814 4886 generic.go:334] "Generic (PLEG): container finished" podID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerID="3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9" exitCode=0 Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.911910 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5tw7" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.911975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5tw7" event={"ID":"f6718b8d-f2ef-4c47-8239-17ef650a27a7","Type":"ContainerDied","Data":"3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9"} Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.912024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5tw7" event={"ID":"f6718b8d-f2ef-4c47-8239-17ef650a27a7","Type":"ContainerDied","Data":"26a6f8af8021fa724e4bcc75fd365f6302d8dd4b22370c87c78a74de5c846d93"} Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.912053 4886 scope.go:117] "RemoveContainer" containerID="3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.934062 4886 scope.go:117] "RemoveContainer" containerID="127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493" Feb 19 21:14:53 crc kubenswrapper[4886]: I0219 21:14:53.958676 4886 scope.go:117] "RemoveContainer" containerID="ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.001865 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.009132 4886 scope.go:117] "RemoveContainer" containerID="3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9" Feb 19 21:14:54 crc kubenswrapper[4886]: E0219 21:14:54.009657 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9\": container with ID starting with 3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9 not found: ID does 
not exist" containerID="3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.009705 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9"} err="failed to get container status \"3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9\": rpc error: code = NotFound desc = could not find container \"3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9\": container with ID starting with 3236674c4a6df86421b2c88448cbaeb9e25533c71ac23ac087d99f82a506ebf9 not found: ID does not exist" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.009735 4886 scope.go:117] "RemoveContainer" containerID="127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493" Feb 19 21:14:54 crc kubenswrapper[4886]: E0219 21:14:54.010176 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493\": container with ID starting with 127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493 not found: ID does not exist" containerID="127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.010255 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493"} err="failed to get container status \"127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493\": rpc error: code = NotFound desc = could not find container \"127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493\": container with ID starting with 127b854418d75e1ba53f3aa302e8b86b32076e184dd2937a8329bed4659e8493 not found: ID does not exist" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.010416 4886 
scope.go:117] "RemoveContainer" containerID="ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d" Feb 19 21:14:54 crc kubenswrapper[4886]: E0219 21:14:54.010829 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d\": container with ID starting with ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d not found: ID does not exist" containerID="ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.010887 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d"} err="failed to get container status \"ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d\": rpc error: code = NotFound desc = could not find container \"ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d\": container with ID starting with ebe3bf67ca0c4467b0bbd8eb19d1a6e7a45daf480ae1a31cc6a65e30333e0a0d not found: ID does not exist" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.167634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6718b8d-f2ef-4c47-8239-17ef650a27a7" (UID: "f6718b8d-f2ef-4c47-8239-17ef650a27a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.208929 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6718b8d-f2ef-4c47-8239-17ef650a27a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.239176 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5tw7"] Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.244478 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5tw7"] Feb 19 21:14:54 crc kubenswrapper[4886]: I0219 21:14:54.615555 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" path="/var/lib/kubelet/pods/f6718b8d-f2ef-4c47-8239-17ef650a27a7/volumes" Feb 19 21:14:55 crc kubenswrapper[4886]: I0219 21:14:55.793064 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjxqw"] Feb 19 21:14:55 crc kubenswrapper[4886]: I0219 21:14:55.930524 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jjxqw" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="registry-server" containerID="cri-o://4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a" gracePeriod=2 Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.357185 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.555848 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-catalog-content\") pod \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.556034 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-utilities\") pod \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.556081 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hktw8\" (UniqueName: \"kubernetes.io/projected/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-kube-api-access-hktw8\") pod \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\" (UID: \"ab2b33f1-624b-4f9c-bb9c-0362781b2c44\") " Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.556828 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-utilities" (OuterVolumeSpecName: "utilities") pod "ab2b33f1-624b-4f9c-bb9c-0362781b2c44" (UID: "ab2b33f1-624b-4f9c-bb9c-0362781b2c44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.564599 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-kube-api-access-hktw8" (OuterVolumeSpecName: "kube-api-access-hktw8") pod "ab2b33f1-624b-4f9c-bb9c-0362781b2c44" (UID: "ab2b33f1-624b-4f9c-bb9c-0362781b2c44"). InnerVolumeSpecName "kube-api-access-hktw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.677738 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.677795 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hktw8\" (UniqueName: \"kubernetes.io/projected/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-kube-api-access-hktw8\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.941872 4886 generic.go:334] "Generic (PLEG): container finished" podID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerID="4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a" exitCode=0 Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.941925 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjxqw" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.941927 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerDied","Data":"4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a"} Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.942079 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjxqw" event={"ID":"ab2b33f1-624b-4f9c-bb9c-0362781b2c44","Type":"ContainerDied","Data":"d0488b5a2fb0bfc3339e6f46a3602d4961451282743195a4698746c27d029bef"} Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.942120 4886 scope.go:117] "RemoveContainer" containerID="4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a" Feb 19 21:14:56 crc kubenswrapper[4886]: I0219 21:14:56.970215 4886 scope.go:117] "RemoveContainer" 
containerID="a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.005468 4886 scope.go:117] "RemoveContainer" containerID="a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.049638 4886 scope.go:117] "RemoveContainer" containerID="4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a" Feb 19 21:14:57 crc kubenswrapper[4886]: E0219 21:14:57.050900 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a\": container with ID starting with 4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a not found: ID does not exist" containerID="4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.050937 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a"} err="failed to get container status \"4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a\": rpc error: code = NotFound desc = could not find container \"4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a\": container with ID starting with 4e222466e6561df7f547563808edf53b48c8866a910aca4e0e649277ae61c54a not found: ID does not exist" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.050976 4886 scope.go:117] "RemoveContainer" containerID="a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad" Feb 19 21:14:57 crc kubenswrapper[4886]: E0219 21:14:57.051382 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad\": container with ID starting with 
a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad not found: ID does not exist" containerID="a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.051410 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad"} err="failed to get container status \"a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad\": rpc error: code = NotFound desc = could not find container \"a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad\": container with ID starting with a7b621851f0e63c4d8d6ce95633b967e91e553e9e6937cf4527f76e3cbe5f2ad not found: ID does not exist" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.051428 4886 scope.go:117] "RemoveContainer" containerID="a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c" Feb 19 21:14:57 crc kubenswrapper[4886]: E0219 21:14:57.052418 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c\": container with ID starting with a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c not found: ID does not exist" containerID="a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.052445 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c"} err="failed to get container status \"a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c\": rpc error: code = NotFound desc = could not find container \"a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c\": container with ID starting with a3e25a53eaf53261f421d20596182a5ed5b2196f039d7956eb09a7066ed48d6c not found: ID does not 
exist" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.265688 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab2b33f1-624b-4f9c-bb9c-0362781b2c44" (UID: "ab2b33f1-624b-4f9c-bb9c-0362781b2c44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.285454 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33f1-624b-4f9c-bb9c-0362781b2c44-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.605636 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjxqw"] Feb 19 21:14:57 crc kubenswrapper[4886]: I0219 21:14:57.610351 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jjxqw"] Feb 19 21:14:57 crc kubenswrapper[4886]: E0219 21:14:57.695658 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2b33f1_624b_4f9c_bb9c_0362781b2c44.slice/crio-d0488b5a2fb0bfc3339e6f46a3602d4961451282743195a4698746c27d029bef\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2b33f1_624b_4f9c_bb9c_0362781b2c44.slice\": RecentStats: unable to find data in memory cache]" Feb 19 21:14:58 crc kubenswrapper[4886]: I0219 21:14:58.623444 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" path="/var/lib/kubelet/pods/ab2b33f1-624b-4f9c-bb9c-0362781b2c44/volumes" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.188237 4886 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn"] Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189437 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189463 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189510 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189523 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189559 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189572 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189600 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189612 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189649 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189661 4886 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189697 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189709 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189741 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189753 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189783 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189795 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4886]: E0219 21:15:00.189815 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.189827 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.190087 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6718b8d-f2ef-4c47-8239-17ef650a27a7" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.190111 4886 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ab2b33f1-624b-4f9c-bb9c-0362781b2c44" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.190145 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3244f8-5100-4bb4-8e89-814b4b311240" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.191108 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.194180 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.198655 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn"] Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.199816 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.232138 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-config-volume\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.232382 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-secret-volume\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 
crc kubenswrapper[4886]: I0219 21:15:00.232436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6q4\" (UniqueName: \"kubernetes.io/projected/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-kube-api-access-nz6q4\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.334172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-secret-volume\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.334224 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6q4\" (UniqueName: \"kubernetes.io/projected/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-kube-api-access-nz6q4\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.334286 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-config-volume\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.335595 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-config-volume\") pod \"collect-profiles-29525595-9kljn\" 
(UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.356174 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-secret-volume\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.359841 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6q4\" (UniqueName: \"kubernetes.io/projected/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-kube-api-access-nz6q4\") pod \"collect-profiles-29525595-9kljn\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.529300 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:00 crc kubenswrapper[4886]: I0219 21:15:00.973940 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn"] Feb 19 21:15:01 crc kubenswrapper[4886]: I0219 21:15:01.989917 4886 generic.go:334] "Generic (PLEG): container finished" podID="a9dbcc8a-fdec-4646-b231-8f5f8bce060d" containerID="088a70e9f76feb2ddd4c405ba5168f1819004407f12182a81daac0e2d86c2a3c" exitCode=0 Feb 19 21:15:01 crc kubenswrapper[4886]: I0219 21:15:01.990206 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" event={"ID":"a9dbcc8a-fdec-4646-b231-8f5f8bce060d","Type":"ContainerDied","Data":"088a70e9f76feb2ddd4c405ba5168f1819004407f12182a81daac0e2d86c2a3c"} Feb 19 21:15:01 crc kubenswrapper[4886]: I0219 21:15:01.990238 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" event={"ID":"a9dbcc8a-fdec-4646-b231-8f5f8bce060d","Type":"ContainerStarted","Data":"8a9e01b56faa0f1639bb3b0cb73306e88a3f4e26db6d59483704984fb5440e25"} Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.363819 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.400049 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-secret-volume\") pod \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.400162 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6q4\" (UniqueName: \"kubernetes.io/projected/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-kube-api-access-nz6q4\") pod \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.400338 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-config-volume\") pod \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\" (UID: \"a9dbcc8a-fdec-4646-b231-8f5f8bce060d\") " Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.401947 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-config-volume" (OuterVolumeSpecName: "config-volume") pod "a9dbcc8a-fdec-4646-b231-8f5f8bce060d" (UID: "a9dbcc8a-fdec-4646-b231-8f5f8bce060d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.407109 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a9dbcc8a-fdec-4646-b231-8f5f8bce060d" (UID: "a9dbcc8a-fdec-4646-b231-8f5f8bce060d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.409819 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-kube-api-access-nz6q4" (OuterVolumeSpecName: "kube-api-access-nz6q4") pod "a9dbcc8a-fdec-4646-b231-8f5f8bce060d" (UID: "a9dbcc8a-fdec-4646-b231-8f5f8bce060d"). InnerVolumeSpecName "kube-api-access-nz6q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.502372 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.502417 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6q4\" (UniqueName: \"kubernetes.io/projected/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-kube-api-access-nz6q4\") on node \"crc\" DevicePath \"\"" Feb 19 21:15:03 crc kubenswrapper[4886]: I0219 21:15:03.502436 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9dbcc8a-fdec-4646-b231-8f5f8bce060d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:15:04 crc kubenswrapper[4886]: I0219 21:15:04.012883 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" event={"ID":"a9dbcc8a-fdec-4646-b231-8f5f8bce060d","Type":"ContainerDied","Data":"8a9e01b56faa0f1639bb3b0cb73306e88a3f4e26db6d59483704984fb5440e25"} Feb 19 21:15:04 crc kubenswrapper[4886]: I0219 21:15:04.012955 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9e01b56faa0f1639bb3b0cb73306e88a3f4e26db6d59483704984fb5440e25" Feb 19 21:15:04 crc kubenswrapper[4886]: I0219 21:15:04.012961 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.072531 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.731072 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-x88rq"] Feb 19 21:15:05 crc kubenswrapper[4886]: E0219 21:15:05.731519 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dbcc8a-fdec-4646-b231-8f5f8bce060d" containerName="collect-profiles" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.731542 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dbcc8a-fdec-4646-b231-8f5f8bce060d" containerName="collect-profiles" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.731724 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dbcc8a-fdec-4646-b231-8f5f8bce060d" containerName="collect-profiles" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.734872 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.736203 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.738463 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.740741 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9"] Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.741648 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.744718 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dlbfv" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.746322 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.764108 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9"] Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.832863 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kkxcp"] Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.834231 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kkxcp" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.835913 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wvwzr" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.835925 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.836235 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.836556 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.848521 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-metrics-certs\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " 
pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.848822 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-sockets\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.848920 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-startup\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.849008 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-reloader\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.849089 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6nw\" (UniqueName: \"kubernetes.io/projected/d9bf26b3-bce9-456e-a767-cd97c1160e4d-kube-api-access-qx6nw\") pod \"frr-k8s-webhook-server-78b44bf5bb-5n8d9\" (UID: \"d9bf26b3-bce9-456e-a767-cd97c1160e4d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.849376 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-conf\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc 
kubenswrapper[4886]: I0219 21:15:05.849536 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzsp\" (UniqueName: \"kubernetes.io/projected/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-kube-api-access-lbzsp\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.849567 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9bf26b3-bce9-456e-a767-cd97c1160e4d-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5n8d9\" (UID: \"d9bf26b3-bce9-456e-a767-cd97c1160e4d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.849619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-metrics\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.850698 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-p4hpl"] Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.854722 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.859060 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-p4hpl"] Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.865896 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951255 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-metrics\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951361 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-metrics-certs\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951382 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-metrics-certs\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951406 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-sockets\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951433 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-startup\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951454 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8nvn\" (UniqueName: \"kubernetes.io/projected/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-kube-api-access-t8nvn\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951483 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-reloader\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6nw\" (UniqueName: \"kubernetes.io/projected/d9bf26b3-bce9-456e-a767-cd97c1160e4d-kube-api-access-qx6nw\") pod \"frr-k8s-webhook-server-78b44bf5bb-5n8d9\" (UID: \"d9bf26b3-bce9-456e-a767-cd97c1160e4d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951523 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-conf\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951543 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-metallb-excludel2\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951566 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951584 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzsp\" (UniqueName: \"kubernetes.io/projected/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-kube-api-access-lbzsp\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951599 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9bf26b3-bce9-456e-a767-cd97c1160e4d-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5n8d9\" (UID: \"d9bf26b3-bce9-456e-a767-cd97c1160e4d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.951753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-metrics\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.952907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-conf\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " 
pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.953676 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-reloader\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.953714 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-sockets\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.953766 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-frr-startup\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.956720 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-metrics-certs\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.956742 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9bf26b3-bce9-456e-a767-cd97c1160e4d-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5n8d9\" (UID: \"d9bf26b3-bce9-456e-a767-cd97c1160e4d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.973243 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6nw\" (UniqueName: 
\"kubernetes.io/projected/d9bf26b3-bce9-456e-a767-cd97c1160e4d-kube-api-access-qx6nw\") pod \"frr-k8s-webhook-server-78b44bf5bb-5n8d9\" (UID: \"d9bf26b3-bce9-456e-a767-cd97c1160e4d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:05 crc kubenswrapper[4886]: I0219 21:15:05.973938 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzsp\" (UniqueName: \"kubernetes.io/projected/81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5-kube-api-access-lbzsp\") pod \"frr-k8s-x88rq\" (UID: \"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5\") " pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053247 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4k8\" (UniqueName: \"kubernetes.io/projected/224de83c-e009-463c-8d59-f2bfa7cd41a5-kube-api-access-rf4k8\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8nvn\" (UniqueName: \"kubernetes.io/projected/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-kube-api-access-t8nvn\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-metallb-excludel2\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053459 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053518 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-cert\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053539 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-metrics-certs\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.053558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-metrics-certs\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: E0219 21:15:06.053609 4886 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 21:15:06 crc kubenswrapper[4886]: E0219 21:15:06.053673 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist podName:6ae01887-56db-45d4-bf3e-0e66a8b3fed8 nodeName:}" failed. No retries permitted until 2026-02-19 21:15:06.553654572 +0000 UTC m=+937.181497622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist") pod "speaker-kkxcp" (UID: "6ae01887-56db-45d4-bf3e-0e66a8b3fed8") : secret "metallb-memberlist" not found Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.054171 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-metallb-excludel2\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.056178 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-metrics-certs\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.057023 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-x88rq" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.068902 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.069050 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8nvn\" (UniqueName: \"kubernetes.io/projected/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-kube-api-access-t8nvn\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.154618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf4k8\" (UniqueName: \"kubernetes.io/projected/224de83c-e009-463c-8d59-f2bfa7cd41a5-kube-api-access-rf4k8\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.155062 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-cert\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.155094 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-metrics-certs\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: E0219 21:15:06.155255 4886 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 19 21:15:06 crc kubenswrapper[4886]: E0219 21:15:06.155342 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-metrics-certs podName:224de83c-e009-463c-8d59-f2bfa7cd41a5 nodeName:}" failed. No retries permitted until 2026-02-19 21:15:06.655326151 +0000 UTC m=+937.283169201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-metrics-certs") pod "controller-69bbfbf88f-p4hpl" (UID: "224de83c-e009-463c-8d59-f2bfa7cd41a5") : secret "controller-certs-secret" not found Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.157122 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.175675 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-cert\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.175806 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf4k8\" (UniqueName: \"kubernetes.io/projected/224de83c-e009-463c-8d59-f2bfa7cd41a5-kube-api-access-rf4k8\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.508306 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9"] Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.561228 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" 
Feb 19 21:15:06 crc kubenswrapper[4886]: E0219 21:15:06.561491 4886 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 21:15:06 crc kubenswrapper[4886]: E0219 21:15:06.561589 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist podName:6ae01887-56db-45d4-bf3e-0e66a8b3fed8 nodeName:}" failed. No retries permitted until 2026-02-19 21:15:07.56156306 +0000 UTC m=+938.189406150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist") pod "speaker-kkxcp" (UID: "6ae01887-56db-45d4-bf3e-0e66a8b3fed8") : secret "metallb-memberlist" not found Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.663031 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-metrics-certs\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.672255 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/224de83c-e009-463c-8d59-f2bfa7cd41a5-metrics-certs\") pod \"controller-69bbfbf88f-p4hpl\" (UID: \"224de83c-e009-463c-8d59-f2bfa7cd41a5\") " pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:06 crc kubenswrapper[4886]: I0219 21:15:06.776087 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.033419 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-p4hpl"] Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.039426 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"60646e5ecf7ef899eda81fe75c04b87d332d5cd341073f13d6409b203d7f8179"} Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.041214 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-p4hpl" event={"ID":"224de83c-e009-463c-8d59-f2bfa7cd41a5","Type":"ContainerStarted","Data":"efcd603149510f8ff0c305846c7aed8f798cf581f536e315c8e714c6144846e5"} Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.042356 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" event={"ID":"d9bf26b3-bce9-456e-a767-cd97c1160e4d","Type":"ContainerStarted","Data":"e728adba1dc439b27f77910755901926939229900453cd1ab569bef68470af39"} Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.578375 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.588174 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6ae01887-56db-45d4-bf3e-0e66a8b3fed8-memberlist\") pod \"speaker-kkxcp\" (UID: \"6ae01887-56db-45d4-bf3e-0e66a8b3fed8\") " pod="metallb-system/speaker-kkxcp" Feb 19 21:15:07 crc kubenswrapper[4886]: I0219 21:15:07.658947 4886 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="metallb-system/speaker-kkxcp" Feb 19 21:15:08 crc kubenswrapper[4886]: I0219 21:15:08.066099 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kkxcp" event={"ID":"6ae01887-56db-45d4-bf3e-0e66a8b3fed8","Type":"ContainerStarted","Data":"974911ae26ba191c8fa9c01572f536392a21bc77b4b2da90127e827f853a976c"} Feb 19 21:15:08 crc kubenswrapper[4886]: I0219 21:15:08.078898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-p4hpl" event={"ID":"224de83c-e009-463c-8d59-f2bfa7cd41a5","Type":"ContainerStarted","Data":"d4340e05e2d22653efc858029ddca86a0a3e1543aa907a0cdd646a41a9bd94e6"} Feb 19 21:15:08 crc kubenswrapper[4886]: I0219 21:15:08.078942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-p4hpl" event={"ID":"224de83c-e009-463c-8d59-f2bfa7cd41a5","Type":"ContainerStarted","Data":"d40e7b219bbd2bc68ddf810dceb37f843e7df8c7d7917967dade65e37b49afba"} Feb 19 21:15:08 crc kubenswrapper[4886]: I0219 21:15:08.079216 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-p4hpl" Feb 19 21:15:08 crc kubenswrapper[4886]: I0219 21:15:08.113477 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-p4hpl" podStartSLOduration=3.113256909 podStartE2EDuration="3.113256909s" podCreationTimestamp="2026-02-19 21:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:15:08.102707309 +0000 UTC m=+938.730550359" watchObservedRunningTime="2026-02-19 21:15:08.113256909 +0000 UTC m=+938.741099949" Feb 19 21:15:09 crc kubenswrapper[4886]: I0219 21:15:09.091613 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kkxcp" 
event={"ID":"6ae01887-56db-45d4-bf3e-0e66a8b3fed8","Type":"ContainerStarted","Data":"268794520c0ea9e4e77cb2f661c83c96a712288ad67cf254c34fe63307ee6837"}
Feb 19 21:15:09 crc kubenswrapper[4886]: I0219 21:15:09.091962 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kkxcp" event={"ID":"6ae01887-56db-45d4-bf3e-0e66a8b3fed8","Type":"ContainerStarted","Data":"fb08d72322409fa5f59c8e9891ff26d8cf0fb13e540b63a6a296bcaa7c019ab1"}
Feb 19 21:15:09 crc kubenswrapper[4886]: I0219 21:15:09.115479 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kkxcp" podStartSLOduration=4.115461866 podStartE2EDuration="4.115461866s" podCreationTimestamp="2026-02-19 21:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:15:09.113139367 +0000 UTC m=+939.740982417" watchObservedRunningTime="2026-02-19 21:15:09.115461866 +0000 UTC m=+939.743304916"
Feb 19 21:15:10 crc kubenswrapper[4886]: I0219 21:15:10.100471 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kkxcp"
Feb 19 21:15:15 crc kubenswrapper[4886]: I0219 21:15:15.137665 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" event={"ID":"d9bf26b3-bce9-456e-a767-cd97c1160e4d","Type":"ContainerStarted","Data":"e69a0d7cb37fe1fd28f0d3cb005baff8d3f23c2684d506b1ec497eb72fe4dee2"}
Feb 19 21:15:15 crc kubenswrapper[4886]: I0219 21:15:15.138385 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9"
Feb 19 21:15:15 crc kubenswrapper[4886]: I0219 21:15:15.140104 4886 generic.go:334] "Generic (PLEG): container finished" podID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerID="c62c7c34f5ece9ac594176383d41bb13992931660a32218ab1036999e4cdb177" exitCode=0
Feb 19 21:15:15 crc kubenswrapper[4886]: I0219 21:15:15.140160 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerDied","Data":"c62c7c34f5ece9ac594176383d41bb13992931660a32218ab1036999e4cdb177"}
Feb 19 21:15:15 crc kubenswrapper[4886]: I0219 21:15:15.166010 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" podStartSLOduration=2.337777189 podStartE2EDuration="10.165980525s" podCreationTimestamp="2026-02-19 21:15:05 +0000 UTC" firstStartedPulling="2026-02-19 21:15:06.520856489 +0000 UTC m=+937.148699539" lastFinishedPulling="2026-02-19 21:15:14.349059825 +0000 UTC m=+944.976902875" observedRunningTime="2026-02-19 21:15:15.156935874 +0000 UTC m=+945.784778934" watchObservedRunningTime="2026-02-19 21:15:15.165980525 +0000 UTC m=+945.793823605"
Feb 19 21:15:16 crc kubenswrapper[4886]: I0219 21:15:16.152667 4886 generic.go:334] "Generic (PLEG): container finished" podID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerID="56622290e32cb349cedf03affab3ff1b817b2c1ac555623bca492aecdb6b8a13" exitCode=0
Feb 19 21:15:16 crc kubenswrapper[4886]: I0219 21:15:16.152874 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerDied","Data":"56622290e32cb349cedf03affab3ff1b817b2c1ac555623bca492aecdb6b8a13"}
Feb 19 21:15:17 crc kubenswrapper[4886]: I0219 21:15:17.163633 4886 generic.go:334] "Generic (PLEG): container finished" podID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerID="6f6389cb1fd6fb62526c9b0b98c4067f98c87980307e7732b6b854c31220258d" exitCode=0
Feb 19 21:15:17 crc kubenswrapper[4886]: I0219 21:15:17.163738 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerDied","Data":"6f6389cb1fd6fb62526c9b0b98c4067f98c87980307e7732b6b854c31220258d"}
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.175878 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"a1b1bdedc3ca14dab9d12ce7e0ee51747d59adec74e62a46f3c21f6d72f42a54"}
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.176275 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"15d2f3f0019ec474123f6d8f49dfbc0c1b0628a5f6c781f062856b30e806ac99"}
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.176293 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"9e19df04520220f88f2981c555f0824e0a20ed9495004dbd9e42c7b7751d83c4"}
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.176304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"635568fb4ab95e5da295d6830626e8657b24801b28b46144b2a90dbd81560f07"}
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.176315 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"9af7a097ddc8f1936f52e2f19d37fe41537f158d2d8864fae5730724df97280d"}
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.324370 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.324412 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.324452 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5"
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.325099 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8df8acb7039f9357ec20617d0239697fac24843c97f7ed406d68afe9849d624e"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 21:15:18 crc kubenswrapper[4886]: I0219 21:15:18.325178 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://8df8acb7039f9357ec20617d0239697fac24843c97f7ed406d68afe9849d624e" gracePeriod=600
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.188523 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"19fe5c04e230dfb8d152595b7ea371d0b40c1eabb8a1bfc7a952f057938bdc5c"}
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.189406 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-x88rq"
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.193246 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="8df8acb7039f9357ec20617d0239697fac24843c97f7ed406d68afe9849d624e" exitCode=0
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.193293 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"8df8acb7039f9357ec20617d0239697fac24843c97f7ed406d68afe9849d624e"}
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.193337 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"07418cd70dea73874048c57bcddf9f82d5a0a608008d842b73583b9e639a54ec"}
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.193357 4886 scope.go:117] "RemoveContainer" containerID="30855271763aa03f95502a62cdac34f3aaa5896ebbe6fec54e402ff34f490d71"
Feb 19 21:15:19 crc kubenswrapper[4886]: I0219 21:15:19.228165 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-x88rq" podStartSLOduration=6.100488325 podStartE2EDuration="14.228148729s" podCreationTimestamp="2026-02-19 21:15:05 +0000 UTC" firstStartedPulling="2026-02-19 21:15:06.241814813 +0000 UTC m=+936.869657863" lastFinishedPulling="2026-02-19 21:15:14.369475217 +0000 UTC m=+944.997318267" observedRunningTime="2026-02-19 21:15:19.21877226 +0000 UTC m=+949.846615330" watchObservedRunningTime="2026-02-19 21:15:19.228148729 +0000 UTC m=+949.855991779"
Feb 19 21:15:21 crc kubenswrapper[4886]: I0219 21:15:21.058176 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-x88rq"
Feb 19 21:15:21 crc kubenswrapper[4886]: I0219 21:15:21.111444 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-x88rq"
Feb 19 21:15:26 crc kubenswrapper[4886]: I0219 21:15:26.073714 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9"
Feb 19 21:15:26 crc kubenswrapper[4886]: I0219 21:15:26.779395 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-p4hpl"
Feb 19 21:15:27 crc kubenswrapper[4886]: I0219 21:15:27.666410 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kkxcp"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.474021 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6wrp9"]
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.475692 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.477473 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.477550 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.480000 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-22287"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.481883 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6wrp9"]
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.563610 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7b5\" (UniqueName: \"kubernetes.io/projected/2faf7857-3b3e-48fc-908e-0ebbcc99c04e-kube-api-access-ql7b5\") pod \"openstack-operator-index-6wrp9\" (UID: \"2faf7857-3b3e-48fc-908e-0ebbcc99c04e\") " pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.665664 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql7b5\" (UniqueName: \"kubernetes.io/projected/2faf7857-3b3e-48fc-908e-0ebbcc99c04e-kube-api-access-ql7b5\") pod \"openstack-operator-index-6wrp9\" (UID: \"2faf7857-3b3e-48fc-908e-0ebbcc99c04e\") " pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.678919 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.689722 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.704381 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql7b5\" (UniqueName: \"kubernetes.io/projected/2faf7857-3b3e-48fc-908e-0ebbcc99c04e-kube-api-access-ql7b5\") pod \"openstack-operator-index-6wrp9\" (UID: \"2faf7857-3b3e-48fc-908e-0ebbcc99c04e\") " pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.798284 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-22287"
Feb 19 21:15:30 crc kubenswrapper[4886]: I0219 21:15:30.806998 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:31 crc kubenswrapper[4886]: I0219 21:15:31.301760 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6wrp9"]
Feb 19 21:15:31 crc kubenswrapper[4886]: W0219 21:15:31.308456 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2faf7857_3b3e_48fc_908e_0ebbcc99c04e.slice/crio-908e596b9f74c3d0b6467d655e43f3991f62bfc897b9fae7fb9303bda2859495 WatchSource:0}: Error finding container 908e596b9f74c3d0b6467d655e43f3991f62bfc897b9fae7fb9303bda2859495: Status 404 returned error can't find the container with id 908e596b9f74c3d0b6467d655e43f3991f62bfc897b9fae7fb9303bda2859495
Feb 19 21:15:32 crc kubenswrapper[4886]: I0219 21:15:32.308331 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wrp9" event={"ID":"2faf7857-3b3e-48fc-908e-0ebbcc99c04e","Type":"ContainerStarted","Data":"908e596b9f74c3d0b6467d655e43f3991f62bfc897b9fae7fb9303bda2859495"}
Feb 19 21:15:33 crc kubenswrapper[4886]: I0219 21:15:33.859231 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6wrp9"]
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.330788 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wrp9" event={"ID":"2faf7857-3b3e-48fc-908e-0ebbcc99c04e","Type":"ContainerStarted","Data":"d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad"}
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.330995 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6wrp9" podUID="2faf7857-3b3e-48fc-908e-0ebbcc99c04e" containerName="registry-server" containerID="cri-o://d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad" gracePeriod=2
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.357606 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6wrp9" podStartSLOduration=1.9292272910000001 podStartE2EDuration="4.357578757s" podCreationTimestamp="2026-02-19 21:15:30 +0000 UTC" firstStartedPulling="2026-02-19 21:15:31.310672473 +0000 UTC m=+961.938515523" lastFinishedPulling="2026-02-19 21:15:33.739023929 +0000 UTC m=+964.366866989" observedRunningTime="2026-02-19 21:15:34.349742246 +0000 UTC m=+964.977585316" watchObservedRunningTime="2026-02-19 21:15:34.357578757 +0000 UTC m=+964.985421837"
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.485048 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fvkbm"]
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.487007 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.503979 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fvkbm"]
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.632309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x622k\" (UniqueName: \"kubernetes.io/projected/3c83b560-7a5e-4c22-82dd-63e1a422e0d6-kube-api-access-x622k\") pod \"openstack-operator-index-fvkbm\" (UID: \"3c83b560-7a5e-4c22-82dd-63e1a422e0d6\") " pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.733721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x622k\" (UniqueName: \"kubernetes.io/projected/3c83b560-7a5e-4c22-82dd-63e1a422e0d6-kube-api-access-x622k\") pod \"openstack-operator-index-fvkbm\" (UID: \"3c83b560-7a5e-4c22-82dd-63e1a422e0d6\") " pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.760507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x622k\" (UniqueName: \"kubernetes.io/projected/3c83b560-7a5e-4c22-82dd-63e1a422e0d6-kube-api-access-x622k\") pod \"openstack-operator-index-fvkbm\" (UID: \"3c83b560-7a5e-4c22-82dd-63e1a422e0d6\") " pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.835416 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:34 crc kubenswrapper[4886]: I0219 21:15:34.941104 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.041381 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql7b5\" (UniqueName: \"kubernetes.io/projected/2faf7857-3b3e-48fc-908e-0ebbcc99c04e-kube-api-access-ql7b5\") pod \"2faf7857-3b3e-48fc-908e-0ebbcc99c04e\" (UID: \"2faf7857-3b3e-48fc-908e-0ebbcc99c04e\") "
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.047585 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2faf7857-3b3e-48fc-908e-0ebbcc99c04e-kube-api-access-ql7b5" (OuterVolumeSpecName: "kube-api-access-ql7b5") pod "2faf7857-3b3e-48fc-908e-0ebbcc99c04e" (UID: "2faf7857-3b3e-48fc-908e-0ebbcc99c04e"). InnerVolumeSpecName "kube-api-access-ql7b5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.143444 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql7b5\" (UniqueName: \"kubernetes.io/projected/2faf7857-3b3e-48fc-908e-0ebbcc99c04e-kube-api-access-ql7b5\") on node \"crc\" DevicePath \"\""
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.295469 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fvkbm"]
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.342096 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fvkbm" event={"ID":"3c83b560-7a5e-4c22-82dd-63e1a422e0d6","Type":"ContainerStarted","Data":"38d9086b059e0f063222e319fb282244bfaab488988c0cd807ee3e2c733e0a82"}
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.344136 4886 generic.go:334] "Generic (PLEG): container finished" podID="2faf7857-3b3e-48fc-908e-0ebbcc99c04e" containerID="d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad" exitCode=0
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.344165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wrp9" event={"ID":"2faf7857-3b3e-48fc-908e-0ebbcc99c04e","Type":"ContainerDied","Data":"d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad"}
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.344182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wrp9" event={"ID":"2faf7857-3b3e-48fc-908e-0ebbcc99c04e","Type":"ContainerDied","Data":"908e596b9f74c3d0b6467d655e43f3991f62bfc897b9fae7fb9303bda2859495"}
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.344214 4886 scope.go:117] "RemoveContainer" containerID="d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad"
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.344348 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6wrp9"
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.376455 4886 scope.go:117] "RemoveContainer" containerID="d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad"
Feb 19 21:15:35 crc kubenswrapper[4886]: E0219 21:15:35.377015 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad\": container with ID starting with d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad not found: ID does not exist" containerID="d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad"
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.377056 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad"} err="failed to get container status \"d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad\": rpc error: code = NotFound desc = could not find container \"d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad\": container with ID starting with d87ae024606336b3880a61e4f41485056ff2943c072199a4c9fb82ef9b7a5aad not found: ID does not exist"
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.382812 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6wrp9"]
Feb 19 21:15:35 crc kubenswrapper[4886]: I0219 21:15:35.390461 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6wrp9"]
Feb 19 21:15:36 crc kubenswrapper[4886]: I0219 21:15:36.060988 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-x88rq"
Feb 19 21:15:36 crc kubenswrapper[4886]: I0219 21:15:36.356640 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fvkbm" event={"ID":"3c83b560-7a5e-4c22-82dd-63e1a422e0d6","Type":"ContainerStarted","Data":"59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5"}
Feb 19 21:15:36 crc kubenswrapper[4886]: I0219 21:15:36.379081 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fvkbm" podStartSLOduration=2.309967932 podStartE2EDuration="2.379061218s" podCreationTimestamp="2026-02-19 21:15:34 +0000 UTC" firstStartedPulling="2026-02-19 21:15:35.3072366 +0000 UTC m=+965.935079660" lastFinishedPulling="2026-02-19 21:15:35.376329876 +0000 UTC m=+966.004172946" observedRunningTime="2026-02-19 21:15:36.374314776 +0000 UTC m=+967.002157826" watchObservedRunningTime="2026-02-19 21:15:36.379061218 +0000 UTC m=+967.006904268"
Feb 19 21:15:36 crc kubenswrapper[4886]: I0219 21:15:36.609911 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2faf7857-3b3e-48fc-908e-0ebbcc99c04e" path="/var/lib/kubelet/pods/2faf7857-3b3e-48fc-908e-0ebbcc99c04e/volumes"
Feb 19 21:15:44 crc kubenswrapper[4886]: I0219 21:15:44.839460 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:44 crc kubenswrapper[4886]: I0219 21:15:44.839816 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:44 crc kubenswrapper[4886]: I0219 21:15:44.904704 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:45 crc kubenswrapper[4886]: I0219 21:15:45.461339 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fvkbm"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.181781 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"]
Feb 19 21:15:51 crc kubenswrapper[4886]: E0219 21:15:51.182689 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2faf7857-3b3e-48fc-908e-0ebbcc99c04e" containerName="registry-server"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.182705 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2faf7857-3b3e-48fc-908e-0ebbcc99c04e" containerName="registry-server"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.182889 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2faf7857-3b3e-48fc-908e-0ebbcc99c04e" containerName="registry-server"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.183946 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.186134 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dcrg6"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.195424 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"]
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.338694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9nc\" (UniqueName: \"kubernetes.io/projected/bf38fe31-4897-4315-bf39-18f5984b4df5-kube-api-access-ks9nc\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.338755 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-util\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.338787 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-bundle\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.440224 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-bundle\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.440378 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9nc\" (UniqueName: \"kubernetes.io/projected/bf38fe31-4897-4315-bf39-18f5984b4df5-kube-api-access-ks9nc\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.440410 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-util\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.440882 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-util\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.440907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-bundle\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.460593 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9nc\" (UniqueName: \"kubernetes.io/projected/bf38fe31-4897-4315-bf39-18f5984b4df5-kube-api-access-ks9nc\") pod \"533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") " pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.509977 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:51 crc kubenswrapper[4886]: I0219 21:15:51.968307 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"]
Feb 19 21:15:52 crc kubenswrapper[4886]: I0219 21:15:52.496732 4886 generic.go:334] "Generic (PLEG): container finished" podID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerID="90d65c705cd44cafd1218be825738ac8844d10fb5999a7aafde1070c3f24401e" exitCode=0
Feb 19 21:15:52 crc kubenswrapper[4886]: I0219 21:15:52.496818 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj" event={"ID":"bf38fe31-4897-4315-bf39-18f5984b4df5","Type":"ContainerDied","Data":"90d65c705cd44cafd1218be825738ac8844d10fb5999a7aafde1070c3f24401e"}
Feb 19 21:15:52 crc kubenswrapper[4886]: I0219 21:15:52.497151 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj" event={"ID":"bf38fe31-4897-4315-bf39-18f5984b4df5","Type":"ContainerStarted","Data":"cb4e1513d2d50fc4b4ed8429c908f7f0eab4407d0e27b897a50159ec230fc175"}
Feb 19 21:15:53 crc kubenswrapper[4886]: I0219 21:15:53.506518 4886 generic.go:334] "Generic (PLEG): container finished" podID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerID="44c156f6131e49b15fefcc679798e831604a6ea8c5d4b103de5f11366be69005" exitCode=0
Feb 19 21:15:53 crc kubenswrapper[4886]: I0219 21:15:53.507164 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj" event={"ID":"bf38fe31-4897-4315-bf39-18f5984b4df5","Type":"ContainerDied","Data":"44c156f6131e49b15fefcc679798e831604a6ea8c5d4b103de5f11366be69005"}
Feb 19 21:15:54 crc kubenswrapper[4886]: I0219 21:15:54.515195 4886 generic.go:334] "Generic (PLEG): container finished" podID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerID="dd4ef1094683f353226781bbabea7049194949601a18bc246c495a91897957bd" exitCode=0
Feb 19 21:15:54 crc kubenswrapper[4886]: I0219 21:15:54.515280 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj" event={"ID":"bf38fe31-4897-4315-bf39-18f5984b4df5","Type":"ContainerDied","Data":"dd4ef1094683f353226781bbabea7049194949601a18bc246c495a91897957bd"}
Feb 19 21:15:55 crc kubenswrapper[4886]: I0219 21:15:55.875634 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.030153 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-bundle\") pod \"bf38fe31-4897-4315-bf39-18f5984b4df5\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") "
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.030547 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9nc\" (UniqueName: \"kubernetes.io/projected/bf38fe31-4897-4315-bf39-18f5984b4df5-kube-api-access-ks9nc\") pod \"bf38fe31-4897-4315-bf39-18f5984b4df5\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") "
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.030737 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-util\") pod \"bf38fe31-4897-4315-bf39-18f5984b4df5\" (UID: \"bf38fe31-4897-4315-bf39-18f5984b4df5\") "
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.031061 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-bundle" (OuterVolumeSpecName: "bundle") pod "bf38fe31-4897-4315-bf39-18f5984b4df5" (UID: "bf38fe31-4897-4315-bf39-18f5984b4df5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.031543 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.041135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf38fe31-4897-4315-bf39-18f5984b4df5-kube-api-access-ks9nc" (OuterVolumeSpecName: "kube-api-access-ks9nc") pod "bf38fe31-4897-4315-bf39-18f5984b4df5" (UID: "bf38fe31-4897-4315-bf39-18f5984b4df5"). InnerVolumeSpecName "kube-api-access-ks9nc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.051076 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-util" (OuterVolumeSpecName: "util") pod "bf38fe31-4897-4315-bf39-18f5984b4df5" (UID: "bf38fe31-4897-4315-bf39-18f5984b4df5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.133318 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf38fe31-4897-4315-bf39-18f5984b4df5-util\") on node \"crc\" DevicePath \"\""
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.133599 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9nc\" (UniqueName: \"kubernetes.io/projected/bf38fe31-4897-4315-bf39-18f5984b4df5-kube-api-access-ks9nc\") on node \"crc\" DevicePath \"\""
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.536735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj" event={"ID":"bf38fe31-4897-4315-bf39-18f5984b4df5","Type":"ContainerDied","Data":"cb4e1513d2d50fc4b4ed8429c908f7f0eab4407d0e27b897a50159ec230fc175"}
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.536778 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb4e1513d2d50fc4b4ed8429c908f7f0eab4407d0e27b897a50159ec230fc175"
Feb 19 21:15:56 crc kubenswrapper[4886]: I0219 21:15:56.536813 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/533192a6fb53e10c2ef2ce684c936d5ab0d492d89dde7cccee1730ba80g2brj"
Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.176854 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz"]
Feb 19 21:16:03 crc kubenswrapper[4886]: E0219 21:16:03.177566 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="pull"
Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.177579 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="pull"
Feb 19 21:16:03 crc kubenswrapper[4886]: E0219 21:16:03.177606 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="util"
Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.177612 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="util"
Feb 19 21:16:03 crc kubenswrapper[4886]: E0219 21:16:03.177631 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="extract"
Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.177637 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="extract"
Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.177803 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf38fe31-4897-4315-bf39-18f5984b4df5" containerName="extract"
Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.178406 4886 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.181005 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zr5tg" Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.226401 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz"] Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.270297 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djjts\" (UniqueName: \"kubernetes.io/projected/5537cddb-9e8f-4097-9228-e741c3145b56-kube-api-access-djjts\") pod \"openstack-operator-controller-init-5576fd9fcc-hfcqz\" (UID: \"5537cddb-9e8f-4097-9228-e741c3145b56\") " pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.372071 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djjts\" (UniqueName: \"kubernetes.io/projected/5537cddb-9e8f-4097-9228-e741c3145b56-kube-api-access-djjts\") pod \"openstack-operator-controller-init-5576fd9fcc-hfcqz\" (UID: \"5537cddb-9e8f-4097-9228-e741c3145b56\") " pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.391129 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djjts\" (UniqueName: \"kubernetes.io/projected/5537cddb-9e8f-4097-9228-e741c3145b56-kube-api-access-djjts\") pod \"openstack-operator-controller-init-5576fd9fcc-hfcqz\" (UID: \"5537cddb-9e8f-4097-9228-e741c3145b56\") " pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.508014 4886 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:03 crc kubenswrapper[4886]: I0219 21:16:03.957810 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz"] Feb 19 21:16:03 crc kubenswrapper[4886]: W0219 21:16:03.961477 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5537cddb_9e8f_4097_9228_e741c3145b56.slice/crio-3a81ea78e6d829939d251c0b4e5efde1d8996eddd6cee3a05a8860bc6472e38d WatchSource:0}: Error finding container 3a81ea78e6d829939d251c0b4e5efde1d8996eddd6cee3a05a8860bc6472e38d: Status 404 returned error can't find the container with id 3a81ea78e6d829939d251c0b4e5efde1d8996eddd6cee3a05a8860bc6472e38d Feb 19 21:16:04 crc kubenswrapper[4886]: I0219 21:16:04.652567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" event={"ID":"5537cddb-9e8f-4097-9228-e741c3145b56","Type":"ContainerStarted","Data":"3a81ea78e6d829939d251c0b4e5efde1d8996eddd6cee3a05a8860bc6472e38d"} Feb 19 21:16:08 crc kubenswrapper[4886]: I0219 21:16:08.639299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" event={"ID":"5537cddb-9e8f-4097-9228-e741c3145b56","Type":"ContainerStarted","Data":"d7cd95c3f63073b9a4c57759f15b90d7c96276cd1173dd65d5c8316de69c0299"} Feb 19 21:16:08 crc kubenswrapper[4886]: I0219 21:16:08.639923 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:08 crc kubenswrapper[4886]: I0219 21:16:08.690778 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" podStartSLOduration=1.672361064 
podStartE2EDuration="5.690751015s" podCreationTimestamp="2026-02-19 21:16:03 +0000 UTC" firstStartedPulling="2026-02-19 21:16:03.965221357 +0000 UTC m=+994.593064417" lastFinishedPulling="2026-02-19 21:16:07.983611318 +0000 UTC m=+998.611454368" observedRunningTime="2026-02-19 21:16:08.684655391 +0000 UTC m=+999.312498461" watchObservedRunningTime="2026-02-19 21:16:08.690751015 +0000 UTC m=+999.318594105" Feb 19 21:16:13 crc kubenswrapper[4886]: I0219 21:16:13.513880 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.391333 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.392634 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.395935 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mb8rr" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.400698 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.401886 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.409322 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rcv5t" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.415359 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.420343 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.431954 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.432949 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.439611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-42zjk" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.446826 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vg2v\" (UniqueName: \"kubernetes.io/projected/06cb83ff-29f5-438f-87b0-32bb5899552d-kube-api-access-8vg2v\") pod \"designate-operator-controller-manager-6d8bf5c495-dcg9l\" (UID: \"06cb83ff-29f5-438f-87b0-32bb5899552d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.446927 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ct4\" (UniqueName: \"kubernetes.io/projected/d5e2840a-8a17-4ddd-92e5-d033222d3dee-kube-api-access-76ct4\") pod \"cinder-operator-controller-manager-5d946d989d-ggvsr\" (UID: \"d5e2840a-8a17-4ddd-92e5-d033222d3dee\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.447035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxpd\" (UniqueName: \"kubernetes.io/projected/a87f6938-30e5-4481-ba31-246084feaa8a-kube-api-access-7kxpd\") pod \"barbican-operator-controller-manager-868647ff47-r2k2r\" (UID: \"a87f6938-30e5-4481-ba31-246084feaa8a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.453181 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l"] Feb 19 21:16:52 crc kubenswrapper[4886]: 
I0219 21:16:52.460606 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-dm5th"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.461579 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.495586 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-dm5th"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.496076 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pfmt6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.542163 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.550310 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.563880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vg2v\" (UniqueName: \"kubernetes.io/projected/06cb83ff-29f5-438f-87b0-32bb5899552d-kube-api-access-8vg2v\") pod \"designate-operator-controller-manager-6d8bf5c495-dcg9l\" (UID: \"06cb83ff-29f5-438f-87b0-32bb5899552d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.563955 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x6f\" (UniqueName: \"kubernetes.io/projected/6b93dd73-4b64-418b-aa60-511213b8f1fd-kube-api-access-g4x6f\") pod \"glance-operator-controller-manager-77987464f4-dm5th\" (UID: \"6b93dd73-4b64-418b-aa60-511213b8f1fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.564040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ct4\" (UniqueName: \"kubernetes.io/projected/d5e2840a-8a17-4ddd-92e5-d033222d3dee-kube-api-access-76ct4\") pod \"cinder-operator-controller-manager-5d946d989d-ggvsr\" (UID: \"d5e2840a-8a17-4ddd-92e5-d033222d3dee\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.564145 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxpd\" (UniqueName: \"kubernetes.io/projected/a87f6938-30e5-4481-ba31-246084feaa8a-kube-api-access-7kxpd\") pod \"barbican-operator-controller-manager-868647ff47-r2k2r\" (UID: \"a87f6938-30e5-4481-ba31-246084feaa8a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:16:52 crc 
kubenswrapper[4886]: I0219 21:16:52.567787 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6gz5n" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.570255 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.575356 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.581366 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hjl7x" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.605304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.606207 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxpd\" (UniqueName: \"kubernetes.io/projected/a87f6938-30e5-4481-ba31-246084feaa8a-kube-api-access-7kxpd\") pod \"barbican-operator-controller-manager-868647ff47-r2k2r\" (UID: \"a87f6938-30e5-4481-ba31-246084feaa8a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.620998 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ct4\" (UniqueName: \"kubernetes.io/projected/d5e2840a-8a17-4ddd-92e5-d033222d3dee-kube-api-access-76ct4\") pod \"cinder-operator-controller-manager-5d946d989d-ggvsr\" (UID: \"d5e2840a-8a17-4ddd-92e5-d033222d3dee\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.625820 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8vg2v\" (UniqueName: \"kubernetes.io/projected/06cb83ff-29f5-438f-87b0-32bb5899552d-kube-api-access-8vg2v\") pod \"designate-operator-controller-manager-6d8bf5c495-dcg9l\" (UID: \"06cb83ff-29f5-438f-87b0-32bb5899552d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.634523 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.634925 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-6d62b"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.635962 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.641565 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-prx7c" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.641716 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.656640 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-6d62b"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.668745 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.669051 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x6f\" (UniqueName: 
\"kubernetes.io/projected/6b93dd73-4b64-418b-aa60-511213b8f1fd-kube-api-access-g4x6f\") pod \"glance-operator-controller-manager-77987464f4-dm5th\" (UID: \"6b93dd73-4b64-418b-aa60-511213b8f1fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.669093 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2489\" (UniqueName: \"kubernetes.io/projected/b6a90270-aa6d-4792-96bc-333bff7f15df-kube-api-access-j2489\") pod \"horizon-operator-controller-manager-5b9b8895d5-mvk6x\" (UID: \"b6a90270-aa6d-4792-96bc-333bff7f15df\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.669122 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.669145 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lg9n\" (UniqueName: \"kubernetes.io/projected/ef72a766-0d85-430a-ab0d-f0eda86f582f-kube-api-access-6lg9n\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.669204 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnsn\" (UniqueName: \"kubernetes.io/projected/4906db46-96ab-4aac-8aac-ba1532087aa2-kube-api-access-jbnsn\") pod 
\"heat-operator-controller-manager-69f49c598c-qtwh6\" (UID: \"4906db46-96ab-4aac-8aac-ba1532087aa2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.670721 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.673901 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zr8m4" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.694252 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.700116 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.703769 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-dqz9j" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.717277 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x6f\" (UniqueName: \"kubernetes.io/projected/6b93dd73-4b64-418b-aa60-511213b8f1fd-kube-api-access-g4x6f\") pod \"glance-operator-controller-manager-77987464f4-dm5th\" (UID: \"6b93dd73-4b64-418b-aa60-511213b8f1fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.717316 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.717693 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.733429 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.768555 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.771150 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbnsn\" (UniqueName: \"kubernetes.io/projected/4906db46-96ab-4aac-8aac-ba1532087aa2-kube-api-access-jbnsn\") pod \"heat-operator-controller-manager-69f49c598c-qtwh6\" (UID: \"4906db46-96ab-4aac-8aac-ba1532087aa2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.771191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjn9\" (UniqueName: \"kubernetes.io/projected/14b2ecba-fa5a-41f5-90d5-5085e30e277e-kube-api-access-fvjn9\") pod \"keystone-operator-controller-manager-b4d948c87-6kxv6\" (UID: \"14b2ecba-fa5a-41f5-90d5-5085e30e277e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.771256 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bjc\" (UniqueName: \"kubernetes.io/projected/3d51e481-ad0d-4d45-b0ee-7ce02b1c428d-kube-api-access-q8bjc\") pod 
\"ironic-operator-controller-manager-554564d7fc-l59mx\" (UID: \"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.771298 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2489\" (UniqueName: \"kubernetes.io/projected/b6a90270-aa6d-4792-96bc-333bff7f15df-kube-api-access-j2489\") pod \"horizon-operator-controller-manager-5b9b8895d5-mvk6x\" (UID: \"b6a90270-aa6d-4792-96bc-333bff7f15df\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.771329 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.771350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lg9n\" (UniqueName: \"kubernetes.io/projected/ef72a766-0d85-430a-ab0d-f0eda86f582f-kube-api-access-6lg9n\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:52 crc kubenswrapper[4886]: E0219 21:16:52.773763 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:52 crc kubenswrapper[4886]: E0219 21:16:52.773812 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert podName:ef72a766-0d85-430a-ab0d-f0eda86f582f nodeName:}" failed. 
No retries permitted until 2026-02-19 21:16:53.273795712 +0000 UTC m=+1043.901638762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert") pod "infra-operator-controller-manager-79d975b745-6d62b" (UID: "ef72a766-0d85-430a-ab0d-f0eda86f582f") : secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.782798 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.783963 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.792330 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lz6wb" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.797987 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.800559 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.815179 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbnsn\" (UniqueName: \"kubernetes.io/projected/4906db46-96ab-4aac-8aac-ba1532087aa2-kube-api-access-jbnsn\") pod \"heat-operator-controller-manager-69f49c598c-qtwh6\" (UID: \"4906db46-96ab-4aac-8aac-ba1532087aa2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.816047 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2489\" (UniqueName: \"kubernetes.io/projected/b6a90270-aa6d-4792-96bc-333bff7f15df-kube-api-access-j2489\") pod \"horizon-operator-controller-manager-5b9b8895d5-mvk6x\" (UID: \"b6a90270-aa6d-4792-96bc-333bff7f15df\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.817862 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lg9n\" (UniqueName: \"kubernetes.io/projected/ef72a766-0d85-430a-ab0d-f0eda86f582f-kube-api-access-6lg9n\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.826147 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.827290 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.829613 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.830030 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-65x86" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.840754 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.842148 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.844225 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mctj7" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.859988 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.871387 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.872366 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.873278 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kxcc\" (UniqueName: \"kubernetes.io/projected/78de7c34-f842-4938-8bbb-fef238359913-kube-api-access-5kxcc\") pod \"manila-operator-controller-manager-54f6768c69-qd57q\" (UID: \"78de7c34-f842-4938-8bbb-fef238359913\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.873325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjn9\" (UniqueName: \"kubernetes.io/projected/14b2ecba-fa5a-41f5-90d5-5085e30e277e-kube-api-access-fvjn9\") pod \"keystone-operator-controller-manager-b4d948c87-6kxv6\" (UID: \"14b2ecba-fa5a-41f5-90d5-5085e30e277e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.873364 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrv9v\" (UniqueName: \"kubernetes.io/projected/96ff2cdf-fb2a-4544-a042-f17dcfc808c2-kube-api-access-vrv9v\") pod \"neutron-operator-controller-manager-64ddbf8bb-2fmz6\" (UID: \"96ff2cdf-fb2a-4544-a042-f17dcfc808c2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.873405 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bjc\" (UniqueName: \"kubernetes.io/projected/3d51e481-ad0d-4d45-b0ee-7ce02b1c428d-kube-api-access-q8bjc\") pod \"ironic-operator-controller-manager-554564d7fc-l59mx\" (UID: \"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:16:52 crc 
kubenswrapper[4886]: I0219 21:16:52.873459 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vmw\" (UniqueName: \"kubernetes.io/projected/56992c82-2769-4a27-ac41-864dda46aa88-kube-api-access-r2vmw\") pod \"mariadb-operator-controller-manager-6994f66f48-s5rp2\" (UID: \"56992c82-2769-4a27-ac41-864dda46aa88\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.879437 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7l9r6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.894125 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjn9\" (UniqueName: \"kubernetes.io/projected/14b2ecba-fa5a-41f5-90d5-5085e30e277e-kube-api-access-fvjn9\") pod \"keystone-operator-controller-manager-b4d948c87-6kxv6\" (UID: \"14b2ecba-fa5a-41f5-90d5-5085e30e277e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.907346 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.910141 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bjc\" (UniqueName: \"kubernetes.io/projected/3d51e481-ad0d-4d45-b0ee-7ce02b1c428d-kube-api-access-q8bjc\") pod \"ironic-operator-controller-manager-554564d7fc-l59mx\" (UID: \"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.914110 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6"] Feb 19 21:16:52 crc kubenswrapper[4886]: 
I0219 21:16:52.925985 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.927065 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.929212 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7cg7f" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.937202 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.966910 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.967547 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.970233 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.973539 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-m9hpw" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.974495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vmw\" (UniqueName: \"kubernetes.io/projected/56992c82-2769-4a27-ac41-864dda46aa88-kube-api-access-r2vmw\") pod \"mariadb-operator-controller-manager-6994f66f48-s5rp2\" (UID: \"56992c82-2769-4a27-ac41-864dda46aa88\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.974535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmqv\" (UniqueName: \"kubernetes.io/projected/5c4a962c-02bb-48ff-9444-db393b42a9b0-kube-api-access-wzmqv\") pod \"nova-operator-controller-manager-567668f5cf-nj4ct\" (UID: \"5c4a962c-02bb-48ff-9444-db393b42a9b0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.974585 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6kv\" (UniqueName: \"kubernetes.io/projected/fe935d54-8e74-4df9-a450-19df5d20b568-kube-api-access-zl6kv\") pod \"octavia-operator-controller-manager-69f8888797-2cpst\" (UID: \"fe935d54-8e74-4df9-a450-19df5d20b568\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.974605 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kxcc\" (UniqueName: \"kubernetes.io/projected/78de7c34-f842-4938-8bbb-fef238359913-kube-api-access-5kxcc\") pod 
\"manila-operator-controller-manager-54f6768c69-qd57q\" (UID: \"78de7c34-f842-4938-8bbb-fef238359913\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.974652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrv9v\" (UniqueName: \"kubernetes.io/projected/96ff2cdf-fb2a-4544-a042-f17dcfc808c2-kube-api-access-vrv9v\") pod \"neutron-operator-controller-manager-64ddbf8bb-2fmz6\" (UID: \"96ff2cdf-fb2a-4544-a042-f17dcfc808c2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.975569 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.981626 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5"] Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.982789 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.994634 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 21:16:52 crc kubenswrapper[4886]: I0219 21:16:52.994877 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-66h98" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.007359 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.019725 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kxcc\" (UniqueName: \"kubernetes.io/projected/78de7c34-f842-4938-8bbb-fef238359913-kube-api-access-5kxcc\") pod \"manila-operator-controller-manager-54f6768c69-qd57q\" (UID: \"78de7c34-f842-4938-8bbb-fef238359913\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.027320 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrv9v\" (UniqueName: \"kubernetes.io/projected/96ff2cdf-fb2a-4544-a042-f17dcfc808c2-kube-api-access-vrv9v\") pod \"neutron-operator-controller-manager-64ddbf8bb-2fmz6\" (UID: \"96ff2cdf-fb2a-4544-a042-f17dcfc808c2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.033681 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vmw\" (UniqueName: \"kubernetes.io/projected/56992c82-2769-4a27-ac41-864dda46aa88-kube-api-access-r2vmw\") pod \"mariadb-operator-controller-manager-6994f66f48-s5rp2\" (UID: 
\"56992c82-2769-4a27-ac41-864dda46aa88\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.040613 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.042197 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.048913 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p2r4z" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.055577 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.088663 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.091543 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.096853 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.099547 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.105900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl6kv\" (UniqueName: \"kubernetes.io/projected/fe935d54-8e74-4df9-a450-19df5d20b568-kube-api-access-zl6kv\") pod \"octavia-operator-controller-manager-69f8888797-2cpst\" (UID: \"fe935d54-8e74-4df9-a450-19df5d20b568\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.106762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4dq\" (UniqueName: \"kubernetes.io/projected/dbea0765-f1be-4f22-a192-686a73112963-kube-api-access-bt4dq\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.106907 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g797p\" (UniqueName: \"kubernetes.io/projected/6554dbf1-0f46-434a-902d-9aa5bbd055d8-kube-api-access-g797p\") pod \"ovn-operator-controller-manager-d44cf6b75-7c2qx\" (UID: \"6554dbf1-0f46-434a-902d-9aa5bbd055d8\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.107109 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" 
Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.107238 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5dw7p" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.111378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmzz\" (UniqueName: \"kubernetes.io/projected/17769417-6658-4daf-8268-e92194198b5c-kube-api-access-6mmzz\") pod \"placement-operator-controller-manager-8497b45c89-l2kk9\" (UID: \"17769417-6658-4daf-8268-e92194198b5c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.111635 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmqv\" (UniqueName: \"kubernetes.io/projected/5c4a962c-02bb-48ff-9444-db393b42a9b0-kube-api-access-wzmqv\") pod \"nova-operator-controller-manager-567668f5cf-nj4ct\" (UID: \"5c4a962c-02bb-48ff-9444-db393b42a9b0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.132417 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.132971 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.136843 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.141783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmqv\" (UniqueName: \"kubernetes.io/projected/5c4a962c-02bb-48ff-9444-db393b42a9b0-kube-api-access-wzmqv\") pod \"nova-operator-controller-manager-567668f5cf-nj4ct\" (UID: \"5c4a962c-02bb-48ff-9444-db393b42a9b0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.144938 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6kv\" (UniqueName: \"kubernetes.io/projected/fe935d54-8e74-4df9-a450-19df5d20b568-kube-api-access-zl6kv\") pod \"octavia-operator-controller-manager-69f8888797-2cpst\" (UID: \"fe935d54-8e74-4df9-a450-19df5d20b568\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.150386 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.161565 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.175591 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.181435 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.186187 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.195237 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8skzk" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.214815 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.217923 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxgb\" (UniqueName: \"kubernetes.io/projected/16e0b754-3cd3-433d-80c6-11363689e9c3-kube-api-access-xvxgb\") pod \"swift-operator-controller-manager-68f46476f-wxjwn\" (UID: \"16e0b754-3cd3-433d-80c6-11363689e9c3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.218012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4dq\" (UniqueName: \"kubernetes.io/projected/dbea0765-f1be-4f22-a192-686a73112963-kube-api-access-bt4dq\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.218046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g797p\" (UniqueName: \"kubernetes.io/projected/6554dbf1-0f46-434a-902d-9aa5bbd055d8-kube-api-access-g797p\") pod 
\"ovn-operator-controller-manager-d44cf6b75-7c2qx\" (UID: \"6554dbf1-0f46-434a-902d-9aa5bbd055d8\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.218071 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.218124 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmzz\" (UniqueName: \"kubernetes.io/projected/17769417-6658-4daf-8268-e92194198b5c-kube-api-access-6mmzz\") pod \"placement-operator-controller-manager-8497b45c89-l2kk9\" (UID: \"17769417-6658-4daf-8268-e92194198b5c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.218778 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.218822 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert podName:dbea0765-f1be-4f22-a192-686a73112963 nodeName:}" failed. No retries permitted until 2026-02-19 21:16:53.718809631 +0000 UTC m=+1044.346652681 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" (UID: "dbea0765-f1be-4f22-a192-686a73112963") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.244369 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.252134 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g797p\" (UniqueName: \"kubernetes.io/projected/6554dbf1-0f46-434a-902d-9aa5bbd055d8-kube-api-access-g797p\") pod \"ovn-operator-controller-manager-d44cf6b75-7c2qx\" (UID: \"6554dbf1-0f46-434a-902d-9aa5bbd055d8\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.254768 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4dq\" (UniqueName: \"kubernetes.io/projected/dbea0765-f1be-4f22-a192-686a73112963-kube-api-access-bt4dq\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.259384 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-znmcl"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.274165 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-znmcl"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.274243 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.280531 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vkcq7" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.282876 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmzz\" (UniqueName: \"kubernetes.io/projected/17769417-6658-4daf-8268-e92194198b5c-kube-api-access-6mmzz\") pod \"placement-operator-controller-manager-8497b45c89-l2kk9\" (UID: \"17769417-6658-4daf-8268-e92194198b5c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.304058 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.305440 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.313450 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q56wl" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.322473 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.325455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.325613 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.331303 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdv5r\" (UniqueName: \"kubernetes.io/projected/67db5487-865b-4ce2-8ade-a87f6909b85d-kube-api-access-wdv5r\") pod \"telemetry-operator-controller-manager-5484b6858b-lrr24\" (UID: \"67db5487-865b-4ce2-8ade-a87f6909b85d\") " pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.331495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxgb\" (UniqueName: \"kubernetes.io/projected/16e0b754-3cd3-433d-80c6-11363689e9c3-kube-api-access-xvxgb\") pod \"swift-operator-controller-manager-68f46476f-wxjwn\" (UID: \"16e0b754-3cd3-433d-80c6-11363689e9c3\") 
" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.331863 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert podName:ef72a766-0d85-430a-ab0d-f0eda86f582f nodeName:}" failed. No retries permitted until 2026-02-19 21:16:54.33184485 +0000 UTC m=+1044.959687900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert") pod "infra-operator-controller-manager-79d975b745-6d62b" (UID: "ef72a766-0d85-430a-ab0d-f0eda86f582f") : secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.349981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxgb\" (UniqueName: \"kubernetes.io/projected/16e0b754-3cd3-433d-80c6-11363689e9c3-kube-api-access-xvxgb\") pod \"swift-operator-controller-manager-68f46476f-wxjwn\" (UID: \"16e0b754-3cd3-433d-80c6-11363689e9c3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.356858 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.381304 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.382918 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.385625 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.385829 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7ws44" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.385933 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.393187 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.398987 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.400390 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.405609 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-w4pzx" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.431156 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.435621 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqjnq\" (UniqueName: \"kubernetes.io/projected/7d34c65a-18e8-4709-856b-232ceae77630-kube-api-access-mqjnq\") pod \"test-operator-controller-manager-7866795846-znmcl\" (UID: \"7d34c65a-18e8-4709-856b-232ceae77630\") " pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.435699 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltt68\" (UniqueName: \"kubernetes.io/projected/3a08ee98-4149-4379-bb3d-e05dd76f5c8d-kube-api-access-ltt68\") pod \"watcher-operator-controller-manager-5db88f68c-znrbn\" (UID: \"3a08ee98-4149-4379-bb3d-e05dd76f5c8d\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.435791 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdv5r\" (UniqueName: \"kubernetes.io/projected/67db5487-865b-4ce2-8ade-a87f6909b85d-kube-api-access-wdv5r\") pod \"telemetry-operator-controller-manager-5484b6858b-lrr24\" (UID: \"67db5487-865b-4ce2-8ade-a87f6909b85d\") " pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.436343 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.455978 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.457062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdv5r\" (UniqueName: \"kubernetes.io/projected/67db5487-865b-4ce2-8ade-a87f6909b85d-kube-api-access-wdv5r\") pod \"telemetry-operator-controller-manager-5484b6858b-lrr24\" (UID: \"67db5487-865b-4ce2-8ade-a87f6909b85d\") " pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.471990 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.500341 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.537332 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqjnq\" (UniqueName: \"kubernetes.io/projected/7d34c65a-18e8-4709-856b-232ceae77630-kube-api-access-mqjnq\") pod \"test-operator-controller-manager-7866795846-znmcl\" (UID: \"7d34c65a-18e8-4709-856b-232ceae77630\") " pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.537385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod 
\"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.537421 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.537449 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltt68\" (UniqueName: \"kubernetes.io/projected/3a08ee98-4149-4379-bb3d-e05dd76f5c8d-kube-api-access-ltt68\") pod \"watcher-operator-controller-manager-5db88f68c-znrbn\" (UID: \"3a08ee98-4149-4379-bb3d-e05dd76f5c8d\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.537466 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6f8\" (UniqueName: \"kubernetes.io/projected/997a5ddf-b07d-45c0-a843-a833e93596da-kube-api-access-jf6f8\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.537496 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz46v\" (UniqueName: \"kubernetes.io/projected/0ce0ddb2-feaa-4719-8674-ba83fcee0a98-kube-api-access-tz46v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kpd49\" (UID: \"0ce0ddb2-feaa-4719-8674-ba83fcee0a98\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.540863 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.561300 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltt68\" (UniqueName: \"kubernetes.io/projected/3a08ee98-4149-4379-bb3d-e05dd76f5c8d-kube-api-access-ltt68\") pod \"watcher-operator-controller-manager-5db88f68c-znrbn\" (UID: \"3a08ee98-4149-4379-bb3d-e05dd76f5c8d\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.564220 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqjnq\" (UniqueName: \"kubernetes.io/projected/7d34c65a-18e8-4709-856b-232ceae77630-kube-api-access-mqjnq\") pod \"test-operator-controller-manager-7866795846-znmcl\" (UID: \"7d34c65a-18e8-4709-856b-232ceae77630\") " pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.639406 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz46v\" (UniqueName: \"kubernetes.io/projected/0ce0ddb2-feaa-4719-8674-ba83fcee0a98-kube-api-access-tz46v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kpd49\" (UID: \"0ce0ddb2-feaa-4719-8674-ba83fcee0a98\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.639908 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: 
\"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.639964 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.640011 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6f8\" (UniqueName: \"kubernetes.io/projected/997a5ddf-b07d-45c0-a843-a833e93596da-kube-api-access-jf6f8\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.640469 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.640544 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:16:54.14052377 +0000 UTC m=+1044.768366860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "metrics-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.640740 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.640817 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:16:54.140799497 +0000 UTC m=+1044.768642627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.672709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6f8\" (UniqueName: \"kubernetes.io/projected/997a5ddf-b07d-45c0-a843-a833e93596da-kube-api-access-jf6f8\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.672928 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.678159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz46v\" (UniqueName: \"kubernetes.io/projected/0ce0ddb2-feaa-4719-8674-ba83fcee0a98-kube-api-access-tz46v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kpd49\" (UID: \"0ce0ddb2-feaa-4719-8674-ba83fcee0a98\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.684958 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.720817 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-dm5th"] Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.737699 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.741433 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.742603 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: E0219 21:16:53.742651 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert podName:dbea0765-f1be-4f22-a192-686a73112963 nodeName:}" failed. No retries permitted until 2026-02-19 21:16:54.742634173 +0000 UTC m=+1045.370477223 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" (UID: "dbea0765-f1be-4f22-a192-686a73112963") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:53 crc kubenswrapper[4886]: I0219 21:16:53.910401 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.061382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" event={"ID":"6b93dd73-4b64-418b-aa60-511213b8f1fd","Type":"ContainerStarted","Data":"e41492b83aa7caefa393347488d0c590c6436f1fcd022211edac470397c78308"} Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.062549 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" event={"ID":"a87f6938-30e5-4481-ba31-246084feaa8a","Type":"ContainerStarted","Data":"c4cbd72db5b00e83e2df8889db552bcf4b92e3af6b41e182e8e579df9350e5b3"} Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.063805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" event={"ID":"06cb83ff-29f5-438f-87b0-32bb5899552d","Type":"ContainerStarted","Data":"e3c84874c776de8206e6fcbf4e07602aa0a89f2a35730dc337824a672ebd89e8"} Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.065801 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" event={"ID":"d5e2840a-8a17-4ddd-92e5-d033222d3dee","Type":"ContainerStarted","Data":"b983d25ff9965bbaf5c258079e8975ca58ddfdb9165c17a8e8df6fd660d3ee5b"} Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.166627 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.166693 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.166893 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.166939 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:16:55.166926078 +0000 UTC m=+1045.794769128 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "webhook-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.167245 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.167287 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:16:55.167278297 +0000 UTC m=+1045.795121347 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "metrics-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.183396 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.206085 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.214170 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6"] Feb 19 21:16:54 crc kubenswrapper[4886]: W0219 21:16:54.216178 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4906db46_96ab_4aac_8aac_ba1532087aa2.slice/crio-04c61223d1a3d76c379565514915810508d9f106f12dc3fb52d23d2a11c0eaeb WatchSource:0}: Error finding container 04c61223d1a3d76c379565514915810508d9f106f12dc3fb52d23d2a11c0eaeb: Status 404 returned error can't find the container with id 04c61223d1a3d76c379565514915810508d9f106f12dc3fb52d23d2a11c0eaeb Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.369212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.369442 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.369547 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert podName:ef72a766-0d85-430a-ab0d-f0eda86f582f nodeName:}" failed. No retries permitted until 2026-02-19 21:16:56.369522994 +0000 UTC m=+1046.997366064 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert") pod "infra-operator-controller-manager-79d975b745-6d62b" (UID: "ef72a766-0d85-430a-ab0d-f0eda86f582f") : secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.576352 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.589141 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.598318 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.629002 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.629035 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.629047 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct"] Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.776039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.776479 4886 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: E0219 21:16:54.776570 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert podName:dbea0765-f1be-4f22-a192-686a73112963 nodeName:}" failed. No retries permitted until 2026-02-19 21:16:56.776551202 +0000 UTC m=+1047.404394252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" (UID: "dbea0765-f1be-4f22-a192-686a73112963") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:54 crc kubenswrapper[4886]: I0219 21:16:54.993012 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn"] Feb 19 21:16:55 crc kubenswrapper[4886]: W0219 21:16:55.012809 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d34c65a_18e8_4709_856b_232ceae77630.slice/crio-3a6bb5c28add4a2e0b96f39942334db4680b66eb9c5713196eb12d23b0776f6f WatchSource:0}: Error finding container 3a6bb5c28add4a2e0b96f39942334db4680b66eb9c5713196eb12d23b0776f6f: Status 404 returned error can't find the container with id 3a6bb5c28add4a2e0b96f39942334db4680b66eb9c5713196eb12d23b0776f6f Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.028061 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-znmcl"] Feb 19 21:16:55 crc kubenswrapper[4886]: W0219 21:16:55.030171 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6554dbf1_0f46_434a_902d_9aa5bbd055d8.slice/crio-b6ea8ed8cf0a7cee33dcdb96b220533ab14e18f355c5f3d679ba458684998060 WatchSource:0}: Error finding container b6ea8ed8cf0a7cee33dcdb96b220533ab14e18f355c5f3d679ba458684998060: Status 404 returned error can't find the container with id b6ea8ed8cf0a7cee33dcdb96b220533ab14e18f355c5f3d679ba458684998060 Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.035336 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx"] Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.042730 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24"] Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.049054 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9"] Feb 19 21:16:55 crc kubenswrapper[4886]: W0219 21:16:55.059427 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67db5487_865b_4ce2_8ade_a87f6909b85d.slice/crio-3e26267a40fb389708f3fa059e3c72b61c548a247e65516cc2641ccff7a24489 WatchSource:0}: Error finding container 3e26267a40fb389708f3fa059e3c72b61c548a247e65516cc2641ccff7a24489: Status 404 returned error can't find the container with id 3e26267a40fb389708f3fa059e3c72b61c548a247e65516cc2641ccff7a24489 Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.077811 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" event={"ID":"6554dbf1-0f46-434a-902d-9aa5bbd055d8","Type":"ContainerStarted","Data":"b6ea8ed8cf0a7cee33dcdb96b220533ab14e18f355c5f3d679ba458684998060"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.081053 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" event={"ID":"16e0b754-3cd3-433d-80c6-11363689e9c3","Type":"ContainerStarted","Data":"1cdb976645e376a498006d081b9060bbab0e207e987fc3a1849ee61954a618f8"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.082929 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" event={"ID":"b6a90270-aa6d-4792-96bc-333bff7f15df","Type":"ContainerStarted","Data":"59b4c0c1dbe7800f798bc988d297169677602c933d62b668aa7e21380731e453"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.084957 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" event={"ID":"14b2ecba-fa5a-41f5-90d5-5085e30e277e","Type":"ContainerStarted","Data":"6738082785b49e2555138857124f4cbb695b9e335ffa8a90df8e8989faf67cb4"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.086632 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" event={"ID":"7d34c65a-18e8-4709-856b-232ceae77630","Type":"ContainerStarted","Data":"3a6bb5c28add4a2e0b96f39942334db4680b66eb9c5713196eb12d23b0776f6f"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.087870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" event={"ID":"67db5487-865b-4ce2-8ade-a87f6909b85d","Type":"ContainerStarted","Data":"3e26267a40fb389708f3fa059e3c72b61c548a247e65516cc2641ccff7a24489"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.088845 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" event={"ID":"78de7c34-f842-4938-8bbb-fef238359913","Type":"ContainerStarted","Data":"3aacdb5aaf3660d2d59333442c6c3f7216130225828ed6a31fa496f9108a7ba1"} Feb 19 21:16:55 crc 
kubenswrapper[4886]: I0219 21:16:55.098189 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" event={"ID":"17769417-6658-4daf-8268-e92194198b5c","Type":"ContainerStarted","Data":"8aaebf310b68b5240bb6928d548ea9f4119077166327a9b65a121a0516da4e78"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.100249 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" event={"ID":"fe935d54-8e74-4df9-a450-19df5d20b568","Type":"ContainerStarted","Data":"c816a4c1cb690b9f16dcbf8eba37f35fc36695b3fee42644f23d305e85281c3b"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.101422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" event={"ID":"56992c82-2769-4a27-ac41-864dda46aa88","Type":"ContainerStarted","Data":"7c6f3c44f997f7410da4504fcdee1694fb91f4bebe6a8b95ea5c8136d424897e"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.102599 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" event={"ID":"4906db46-96ab-4aac-8aac-ba1532087aa2","Type":"ContainerStarted","Data":"04c61223d1a3d76c379565514915810508d9f106f12dc3fb52d23d2a11c0eaeb"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.106500 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" event={"ID":"5c4a962c-02bb-48ff-9444-db393b42a9b0","Type":"ContainerStarted","Data":"842ff3489cc4ea9199b8a687bc92ae772931bd5f69016a07e50b4b8a9ca8a02c"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.108295 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" 
event={"ID":"96ff2cdf-fb2a-4544-a042-f17dcfc808c2","Type":"ContainerStarted","Data":"94cd19e66bb9848d3f16613578cb4a8909eb058953c99ed2296186538b4a5563"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.109969 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" event={"ID":"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d","Type":"ContainerStarted","Data":"956b5f34a45f6ea72fb7b80e8e75c91038e824bde50894b74aee11d8a6b88396"} Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.199082 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.199201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:55 crc kubenswrapper[4886]: E0219 21:16:55.199643 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 21:16:55 crc kubenswrapper[4886]: E0219 21:16:55.199737 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:16:57.199713778 +0000 UTC m=+1047.827556888 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "metrics-server-cert" not found Feb 19 21:16:55 crc kubenswrapper[4886]: E0219 21:16:55.200370 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 21:16:55 crc kubenswrapper[4886]: E0219 21:16:55.200519 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:16:57.200454317 +0000 UTC m=+1047.828297367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "webhook-server-cert" not found Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.326317 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49"] Feb 19 21:16:55 crc kubenswrapper[4886]: I0219 21:16:55.332301 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn"] Feb 19 21:16:55 crc kubenswrapper[4886]: W0219 21:16:55.345923 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce0ddb2_feaa_4719_8674_ba83fcee0a98.slice/crio-ff7c204f818305811b10d788001df39c18da11cd23afbeb9ec303d4a4a72604b WatchSource:0}: Error finding container ff7c204f818305811b10d788001df39c18da11cd23afbeb9ec303d4a4a72604b: Status 404 returned error can't find the 
container with id ff7c204f818305811b10d788001df39c18da11cd23afbeb9ec303d4a4a72604b Feb 19 21:16:55 crc kubenswrapper[4886]: W0219 21:16:55.368429 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a08ee98_4149_4379_bb3d_e05dd76f5c8d.slice/crio-7fc86ca422d1a042748ff6e61341c62dc2b1ef1a1febe499f9db4e3c1da6c84e WatchSource:0}: Error finding container 7fc86ca422d1a042748ff6e61341c62dc2b1ef1a1febe499f9db4e3c1da6c84e: Status 404 returned error can't find the container with id 7fc86ca422d1a042748ff6e61341c62dc2b1ef1a1febe499f9db4e3c1da6c84e Feb 19 21:16:55 crc kubenswrapper[4886]: E0219 21:16:55.373757 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltt68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-znrbn_openstack-operators(3a08ee98-4149-4379-bb3d-e05dd76f5c8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 21:16:55 crc kubenswrapper[4886]: E0219 21:16:55.375086 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podUID="3a08ee98-4149-4379-bb3d-e05dd76f5c8d" Feb 19 21:16:56 crc kubenswrapper[4886]: I0219 21:16:56.120911 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" event={"ID":"0ce0ddb2-feaa-4719-8674-ba83fcee0a98","Type":"ContainerStarted","Data":"ff7c204f818305811b10d788001df39c18da11cd23afbeb9ec303d4a4a72604b"} Feb 19 21:16:56 crc 
kubenswrapper[4886]: I0219 21:16:56.122638 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" event={"ID":"3a08ee98-4149-4379-bb3d-e05dd76f5c8d","Type":"ContainerStarted","Data":"7fc86ca422d1a042748ff6e61341c62dc2b1ef1a1febe499f9db4e3c1da6c84e"} Feb 19 21:16:56 crc kubenswrapper[4886]: E0219 21:16:56.125443 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podUID="3a08ee98-4149-4379-bb3d-e05dd76f5c8d" Feb 19 21:16:56 crc kubenswrapper[4886]: I0219 21:16:56.434067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:16:56 crc kubenswrapper[4886]: E0219 21:16:56.434243 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:56 crc kubenswrapper[4886]: E0219 21:16:56.434324 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert podName:ef72a766-0d85-430a-ab0d-f0eda86f582f nodeName:}" failed. No retries permitted until 2026-02-19 21:17:00.434304704 +0000 UTC m=+1051.062147754 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert") pod "infra-operator-controller-manager-79d975b745-6d62b" (UID: "ef72a766-0d85-430a-ab0d-f0eda86f582f") : secret "infra-operator-webhook-server-cert" not found Feb 19 21:16:56 crc kubenswrapper[4886]: I0219 21:16:56.859075 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:16:56 crc kubenswrapper[4886]: E0219 21:16:56.859857 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:56 crc kubenswrapper[4886]: E0219 21:16:56.859942 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert podName:dbea0765-f1be-4f22-a192-686a73112963 nodeName:}" failed. No retries permitted until 2026-02-19 21:17:00.859926092 +0000 UTC m=+1051.487769142 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" (UID: "dbea0765-f1be-4f22-a192-686a73112963") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:16:57 crc kubenswrapper[4886]: E0219 21:16:57.148569 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podUID="3a08ee98-4149-4379-bb3d-e05dd76f5c8d" Feb 19 21:16:57 crc kubenswrapper[4886]: I0219 21:16:57.266948 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:57 crc kubenswrapper[4886]: I0219 21:16:57.267008 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:16:57 crc kubenswrapper[4886]: E0219 21:16:57.267218 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 21:16:57 crc kubenswrapper[4886]: E0219 21:16:57.267479 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:17:01.267429172 +0000 UTC m=+1051.895272212 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "metrics-server-cert" not found Feb 19 21:16:57 crc kubenswrapper[4886]: E0219 21:16:57.267241 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 21:16:57 crc kubenswrapper[4886]: E0219 21:16:57.267775 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:17:01.26775618 +0000 UTC m=+1051.895599270 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "webhook-server-cert" not found Feb 19 21:17:00 crc kubenswrapper[4886]: I0219 21:17:00.532553 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:17:00 crc kubenswrapper[4886]: E0219 21:17:00.533022 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 21:17:00 crc kubenswrapper[4886]: E0219 21:17:00.533068 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert podName:ef72a766-0d85-430a-ab0d-f0eda86f582f nodeName:}" failed. No retries permitted until 2026-02-19 21:17:08.533054834 +0000 UTC m=+1059.160897884 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert") pod "infra-operator-controller-manager-79d975b745-6d62b" (UID: "ef72a766-0d85-430a-ab0d-f0eda86f582f") : secret "infra-operator-webhook-server-cert" not found Feb 19 21:17:00 crc kubenswrapper[4886]: I0219 21:17:00.945218 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:17:00 crc kubenswrapper[4886]: E0219 21:17:00.945380 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:17:00 crc kubenswrapper[4886]: E0219 21:17:00.945455 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert podName:dbea0765-f1be-4f22-a192-686a73112963 nodeName:}" failed. No retries permitted until 2026-02-19 21:17:08.945438627 +0000 UTC m=+1059.573281677 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" (UID: "dbea0765-f1be-4f22-a192-686a73112963") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 21:17:01 crc kubenswrapper[4886]: I0219 21:17:01.353155 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:01 crc kubenswrapper[4886]: I0219 21:17:01.353342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:01 crc kubenswrapper[4886]: E0219 21:17:01.353465 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 21:17:01 crc kubenswrapper[4886]: E0219 21:17:01.353512 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:17:09.353496991 +0000 UTC m=+1059.981340041 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "metrics-server-cert" not found Feb 19 21:17:01 crc kubenswrapper[4886]: E0219 21:17:01.353829 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 21:17:01 crc kubenswrapper[4886]: E0219 21:17:01.353859 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:17:09.35385187 +0000 UTC m=+1059.981694920 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "webhook-server-cert" not found Feb 19 21:17:08 crc kubenswrapper[4886]: I0219 21:17:08.615344 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:17:08 crc kubenswrapper[4886]: I0219 21:17:08.625756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef72a766-0d85-430a-ab0d-f0eda86f582f-cert\") pod \"infra-operator-controller-manager-79d975b745-6d62b\" (UID: \"ef72a766-0d85-430a-ab0d-f0eda86f582f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:17:08 crc 
kubenswrapper[4886]: I0219 21:17:08.632311 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:17:09 crc kubenswrapper[4886]: I0219 21:17:09.023047 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:17:09 crc kubenswrapper[4886]: I0219 21:17:09.028192 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbea0765-f1be-4f22-a192-686a73112963-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5\" (UID: \"dbea0765-f1be-4f22-a192-686a73112963\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:17:09 crc kubenswrapper[4886]: I0219 21:17:09.307495 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:17:09 crc kubenswrapper[4886]: I0219 21:17:09.430542 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:09 crc kubenswrapper[4886]: I0219 21:17:09.430607 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:09 crc kubenswrapper[4886]: E0219 21:17:09.430735 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 21:17:09 crc kubenswrapper[4886]: E0219 21:17:09.430875 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:17:25.430848401 +0000 UTC m=+1076.058691461 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "metrics-server-cert" not found Feb 19 21:17:09 crc kubenswrapper[4886]: E0219 21:17:09.430991 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 21:17:09 crc kubenswrapper[4886]: E0219 21:17:09.431146 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs podName:997a5ddf-b07d-45c0-a843-a833e93596da nodeName:}" failed. No retries permitted until 2026-02-19 21:17:25.431109198 +0000 UTC m=+1076.058952448 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs") pod "openstack-operator-controller-manager-78fbc88654-kmltj" (UID: "997a5ddf-b07d-45c0-a843-a833e93596da") : secret "webhook-server-cert" not found Feb 19 21:17:10 crc kubenswrapper[4886]: E0219 21:17:10.325359 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 19 21:17:10 crc kubenswrapper[4886]: E0219 21:17:10.326633 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5kxcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-qd57q_openstack-operators(78de7c34-f842-4938-8bbb-fef238359913): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:10 crc kubenswrapper[4886]: E0219 21:17:10.327990 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" podUID="78de7c34-f842-4938-8bbb-fef238359913" Feb 19 21:17:11 crc kubenswrapper[4886]: E0219 21:17:11.283333 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" podUID="78de7c34-f842-4938-8bbb-fef238359913" Feb 19 21:17:11 crc kubenswrapper[4886]: E0219 21:17:11.440673 4886 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 19 21:17:11 crc kubenswrapper[4886]: E0219 21:17:11.441099 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-76ct4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-ggvsr_openstack-operators(d5e2840a-8a17-4ddd-92e5-d033222d3dee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:11 crc kubenswrapper[4886]: E0219 21:17:11.442296 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" podUID="d5e2840a-8a17-4ddd-92e5-d033222d3dee" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.041936 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.042120 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2vmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-s5rp2_openstack-operators(56992c82-2769-4a27-ac41-864dda46aa88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.043308 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" podUID="56992c82-2769-4a27-ac41-864dda46aa88" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.297881 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" podUID="56992c82-2769-4a27-ac41-864dda46aa88" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.297712 4886 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" podUID="d5e2840a-8a17-4ddd-92e5-d033222d3dee" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.533233 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.533458 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2489,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-mvk6x_openstack-operators(b6a90270-aa6d-4792-96bc-333bff7f15df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:12 crc kubenswrapper[4886]: E0219 21:17:12.534579 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" podUID="b6a90270-aa6d-4792-96bc-333bff7f15df" Feb 19 21:17:13 crc kubenswrapper[4886]: E0219 21:17:13.303324 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" podUID="b6a90270-aa6d-4792-96bc-333bff7f15df" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.137259 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.137502 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vg2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-dcg9l_openstack-operators(06cb83ff-29f5-438f-87b0-32bb5899552d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.138860 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" podUID="06cb83ff-29f5-438f-87b0-32bb5899552d" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.325432 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" podUID="06cb83ff-29f5-438f-87b0-32bb5899552d" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.695122 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.695506 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbnsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-qtwh6_openstack-operators(4906db46-96ab-4aac-8aac-ba1532087aa2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:15 crc kubenswrapper[4886]: E0219 21:17:15.696686 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" podUID="4906db46-96ab-4aac-8aac-ba1532087aa2" Feb 19 21:17:16 crc kubenswrapper[4886]: E0219 21:17:16.333732 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" podUID="4906db46-96ab-4aac-8aac-ba1532087aa2" Feb 19 21:17:17 crc kubenswrapper[4886]: E0219 21:17:17.514533 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 19 21:17:17 crc kubenswrapper[4886]: E0219 21:17:17.514832 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g797p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-7c2qx_openstack-operators(6554dbf1-0f46-434a-902d-9aa5bbd055d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:17 crc kubenswrapper[4886]: E0219 21:17:17.516050 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" podUID="6554dbf1-0f46-434a-902d-9aa5bbd055d8" Feb 19 21:17:18 crc kubenswrapper[4886]: I0219 21:17:18.325314 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:17:18 crc kubenswrapper[4886]: I0219 21:17:18.325702 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:17:18 crc kubenswrapper[4886]: E0219 21:17:18.355712 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" podUID="6554dbf1-0f46-434a-902d-9aa5bbd055d8" Feb 19 21:17:19 crc kubenswrapper[4886]: E0219 21:17:19.550903 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 19 21:17:19 crc kubenswrapper[4886]: E0219 21:17:19.551150 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mmzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-l2kk9_openstack-operators(17769417-6658-4daf-8268-e92194198b5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:19 crc kubenswrapper[4886]: E0219 21:17:19.554614 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" podUID="17769417-6658-4daf-8268-e92194198b5c" Feb 19 21:17:20 crc kubenswrapper[4886]: E0219 21:17:20.368856 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" podUID="17769417-6658-4daf-8268-e92194198b5c" Feb 19 21:17:23 crc kubenswrapper[4886]: E0219 21:17:23.476631 4886 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 19 21:17:23 crc kubenswrapper[4886]: E0219 21:17:23.477504 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqjnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-znmcl_openstack-operators(7d34c65a-18e8-4709-856b-232ceae77630): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:23 crc kubenswrapper[4886]: E0219 21:17:23.479666 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" podUID="7d34c65a-18e8-4709-856b-232ceae77630" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.037734 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.037952 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7kxpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-r2k2r_openstack-operators(a87f6938-30e5-4481-ba31-246084feaa8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.039160 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" podUID="a87f6938-30e5-4481-ba31-246084feaa8a" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.406671 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" podUID="7d34c65a-18e8-4709-856b-232ceae77630" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.406700 4886 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" podUID="a87f6938-30e5-4481-ba31-246084feaa8a" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.571222 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.571768 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8bjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-l59mx_openstack-operators(3d51e481-ad0d-4d45-b0ee-7ce02b1c428d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:24 crc kubenswrapper[4886]: E0219 21:17:24.572936 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" podUID="3d51e481-ad0d-4d45-b0ee-7ce02b1c428d" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.094164 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.094624 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzmqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-nj4ct_openstack-operators(5c4a962c-02bb-48ff-9444-db393b42a9b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.095799 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" podUID="5c4a962c-02bb-48ff-9444-db393b42a9b0" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.420086 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" podUID="5c4a962c-02bb-48ff-9444-db393b42a9b0" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.420134 4886 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" podUID="3d51e481-ad0d-4d45-b0ee-7ce02b1c428d" Feb 19 21:17:25 crc kubenswrapper[4886]: I0219 21:17:25.449934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:25 crc kubenswrapper[4886]: I0219 21:17:25.452712 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:25 crc kubenswrapper[4886]: I0219 21:17:25.464838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-metrics-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: \"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:25 crc kubenswrapper[4886]: I0219 21:17:25.472340 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997a5ddf-b07d-45c0-a843-a833e93596da-webhook-certs\") pod \"openstack-operator-controller-manager-78fbc88654-kmltj\" (UID: 
\"997a5ddf-b07d-45c0-a843-a833e93596da\") " pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:25 crc kubenswrapper[4886]: I0219 21:17:25.521590 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.765990 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.766286 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tz46v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-kpd49_openstack-operators(0ce0ddb2-feaa-4719-8674-ba83fcee0a98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:25 crc kubenswrapper[4886]: E0219 21:17:25.767488 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" podUID="0ce0ddb2-feaa-4719-8674-ba83fcee0a98" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.364298 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.364841 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvjn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-6kxv6_openstack-operators(14b2ecba-fa5a-41f5-90d5-5085e30e277e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.366697 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" podUID="14b2ecba-fa5a-41f5-90d5-5085e30e277e" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.430549 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" podUID="0ce0ddb2-feaa-4719-8674-ba83fcee0a98" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.430668 4886 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" podUID="14b2ecba-fa5a-41f5-90d5-5085e30e277e" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.497627 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.497676 4886 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.497848 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdv5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5484b6858b-lrr24_openstack-operators(67db5487-865b-4ce2-8ade-a87f6909b85d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:17:26 crc kubenswrapper[4886]: E0219 21:17:26.499431 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" podUID="67db5487-865b-4ce2-8ade-a87f6909b85d" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.060037 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-6d62b"] Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.073365 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj"] Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.085625 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5"] Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.448715 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" event={"ID":"96ff2cdf-fb2a-4544-a042-f17dcfc808c2","Type":"ContainerStarted","Data":"e0c40753e777f7d6027c790959a53771e79080ab18d1b4d97d611b2bf39abbb1"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.448845 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.454446 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" event={"ID":"78de7c34-f842-4938-8bbb-fef238359913","Type":"ContainerStarted","Data":"f0bb1299772e75c62ca3943634ef01671e75490270c208e3397f50f3abaecb2e"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.454664 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.455521 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" event={"ID":"dbea0765-f1be-4f22-a192-686a73112963","Type":"ContainerStarted","Data":"21bf28a3d50f3a4fac61b8d7d4278f4baf4c864ed334f880e2b8a529deb5c42d"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.456336 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" event={"ID":"ef72a766-0d85-430a-ab0d-f0eda86f582f","Type":"ContainerStarted","Data":"29ea5d91d6fc9e5a3a8c980c35a67addbf3b2ec419336ea822144bf303ac9aff"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.457555 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" event={"ID":"6b93dd73-4b64-418b-aa60-511213b8f1fd","Type":"ContainerStarted","Data":"f6c2e3784003a1a30e1f5e786c2f70899394180b8460567d18c8e81d16ff70a0"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.457643 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.460244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" event={"ID":"3a08ee98-4149-4379-bb3d-e05dd76f5c8d","Type":"ContainerStarted","Data":"552937146fd1888e6ba6a066200d23fcd03c28298a5c34fdda8a26ab51337db6"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.460455 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.461423 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" 
event={"ID":"997a5ddf-b07d-45c0-a843-a833e93596da","Type":"ContainerStarted","Data":"ae0e4289d6880896f0f1c19daff97dacb083f66b674adc79e688b45f4832cda0"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.462921 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" event={"ID":"16e0b754-3cd3-433d-80c6-11363689e9c3","Type":"ContainerStarted","Data":"80a0d83fb9e6e00e6c03d427d1a387b3584d46fdbdfc3b5de4fd6cc857c3dc17"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.463129 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.464785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" event={"ID":"fe935d54-8e74-4df9-a450-19df5d20b568","Type":"ContainerStarted","Data":"a03290c2214ba04329b6a7c15a737cef23de21827fa862693a50dd117be5db18"} Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.464879 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:17:27 crc kubenswrapper[4886]: E0219 21:17:27.466058 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" podUID="67db5487-865b-4ce2-8ade-a87f6909b85d" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.471411 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" podStartSLOduration=3.783716744 
podStartE2EDuration="35.471389792s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.793437569 +0000 UTC m=+1045.421280619" lastFinishedPulling="2026-02-19 21:17:26.481110607 +0000 UTC m=+1077.108953667" observedRunningTime="2026-02-19 21:17:27.465501043 +0000 UTC m=+1078.093344093" watchObservedRunningTime="2026-02-19 21:17:27.471389792 +0000 UTC m=+1078.099232832" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.487328 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" podStartSLOduration=4.01398053 podStartE2EDuration="35.487308065s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.007795113 +0000 UTC m=+1045.635638153" lastFinishedPulling="2026-02-19 21:17:26.481122618 +0000 UTC m=+1077.108965688" observedRunningTime="2026-02-19 21:17:27.484290678 +0000 UTC m=+1078.112133728" watchObservedRunningTime="2026-02-19 21:17:27.487308065 +0000 UTC m=+1078.115151115" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.521114 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" podStartSLOduration=3.736721845 podStartE2EDuration="35.52109283s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.696952968 +0000 UTC m=+1045.324796018" lastFinishedPulling="2026-02-19 21:17:26.481323933 +0000 UTC m=+1077.109167003" observedRunningTime="2026-02-19 21:17:27.503838843 +0000 UTC m=+1078.131681893" watchObservedRunningTime="2026-02-19 21:17:27.52109283 +0000 UTC m=+1078.148935880" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.542380 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" podStartSLOduration=3.703903796 
podStartE2EDuration="35.542359458s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.735461333 +0000 UTC m=+1045.363304383" lastFinishedPulling="2026-02-19 21:17:26.573916995 +0000 UTC m=+1077.201760045" observedRunningTime="2026-02-19 21:17:27.538465289 +0000 UTC m=+1078.166308339" watchObservedRunningTime="2026-02-19 21:17:27.542359458 +0000 UTC m=+1078.170202508" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.568864 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podStartSLOduration=4.386623878 podStartE2EDuration="35.568849868s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.373609098 +0000 UTC m=+1046.001452148" lastFinishedPulling="2026-02-19 21:17:26.555835048 +0000 UTC m=+1077.183678138" observedRunningTime="2026-02-19 21:17:27.566534049 +0000 UTC m=+1078.194377099" watchObservedRunningTime="2026-02-19 21:17:27.568849868 +0000 UTC m=+1078.196692918" Feb 19 21:17:27 crc kubenswrapper[4886]: I0219 21:17:27.583000 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" podStartSLOduration=4.290376612 podStartE2EDuration="35.582981575s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:53.815804575 +0000 UTC m=+1044.443647625" lastFinishedPulling="2026-02-19 21:17:25.108409538 +0000 UTC m=+1075.736252588" observedRunningTime="2026-02-19 21:17:27.578965604 +0000 UTC m=+1078.206808654" watchObservedRunningTime="2026-02-19 21:17:27.582981575 +0000 UTC m=+1078.210824625" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.473255 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" 
event={"ID":"d5e2840a-8a17-4ddd-92e5-d033222d3dee","Type":"ContainerStarted","Data":"1ff000ecb183402887bddf882c80e0913b7dcf89720c6e5debcfb6b32aa2051e"} Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.473699 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.475959 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" event={"ID":"997a5ddf-b07d-45c0-a843-a833e93596da","Type":"ContainerStarted","Data":"cf3821c89ec69de25ba5b5e771e8812a338317c8094df63cd28a9158ecffabb6"} Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.476099 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.482746 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" event={"ID":"b6a90270-aa6d-4792-96bc-333bff7f15df","Type":"ContainerStarted","Data":"72ee7e5495893540bbd6eed7462858c5145832ead4d93f2dc323bf29141ee57e"} Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.483008 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.484628 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" event={"ID":"56992c82-2769-4a27-ac41-864dda46aa88","Type":"ContainerStarted","Data":"412bfe76c1e3fb1f0ded4cb00166ece1c9fe697e118a1262159383f3244362bd"} Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.536462 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" podStartSLOduration=2.827650345 podStartE2EDuration="36.536445868s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:53.581337943 +0000 UTC m=+1044.209180993" lastFinishedPulling="2026-02-19 21:17:27.290133466 +0000 UTC m=+1077.917976516" observedRunningTime="2026-02-19 21:17:28.500838888 +0000 UTC m=+1079.128681958" watchObservedRunningTime="2026-02-19 21:17:28.536445868 +0000 UTC m=+1079.164288918" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.540991 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" podStartSLOduration=35.540972563 podStartE2EDuration="35.540972563s" podCreationTimestamp="2026-02-19 21:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:17:28.53294318 +0000 UTC m=+1079.160786230" watchObservedRunningTime="2026-02-19 21:17:28.540972563 +0000 UTC m=+1079.168815613" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.555332 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" podStartSLOduration=3.482277868 podStartE2EDuration="36.555316176s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.216040271 +0000 UTC m=+1044.843883321" lastFinishedPulling="2026-02-19 21:17:27.289078579 +0000 UTC m=+1077.916921629" observedRunningTime="2026-02-19 21:17:28.550388341 +0000 UTC m=+1079.178231401" watchObservedRunningTime="2026-02-19 21:17:28.555316176 +0000 UTC m=+1079.183159226" Feb 19 21:17:28 crc kubenswrapper[4886]: I0219 21:17:28.561933 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" podStartSLOduration=4.067782201 podStartE2EDuration="36.561919343s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.796167668 +0000 UTC m=+1045.424010718" lastFinishedPulling="2026-02-19 21:17:27.2903048 +0000 UTC m=+1077.918147860" observedRunningTime="2026-02-19 21:17:28.560305662 +0000 UTC m=+1079.188148712" watchObservedRunningTime="2026-02-19 21:17:28.561919343 +0000 UTC m=+1079.189762393" Feb 19 21:17:32 crc kubenswrapper[4886]: I0219 21:17:32.736623 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" Feb 19 21:17:32 crc kubenswrapper[4886]: I0219 21:17:32.832519 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" Feb 19 21:17:32 crc kubenswrapper[4886]: I0219 21:17:32.978504 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.139910 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.153446 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.156865 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.166007 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.246664 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.459377 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" Feb 19 21:17:33 crc kubenswrapper[4886]: I0219 21:17:33.688240 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.549979 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" event={"ID":"4906db46-96ab-4aac-8aac-ba1532087aa2","Type":"ContainerStarted","Data":"9a851fb159a2a7b4088f50b80a03d44671b7134360dcb85b595564ea11473f4e"} Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.551225 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.554277 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" event={"ID":"6554dbf1-0f46-434a-902d-9aa5bbd055d8","Type":"ContainerStarted","Data":"94730c406a9e565271fe17e518105b63c04e60cd40d5303fac597fa7963deeb5"} Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.554629 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.556504 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" event={"ID":"17769417-6658-4daf-8268-e92194198b5c","Type":"ContainerStarted","Data":"44eac39033282461fcf0eb388f8593733cadbeb2b4fb45a375576913fdf818ed"} Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.556713 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.570878 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" event={"ID":"06cb83ff-29f5-438f-87b0-32bb5899552d","Type":"ContainerStarted","Data":"b83fe3384e82619198f3d4f26786c40344d785b749a20d70c72d181e3eff643b"} Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.571056 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.573906 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" event={"ID":"ef72a766-0d85-430a-ab0d-f0eda86f582f","Type":"ContainerStarted","Data":"f0bd613f315a31c1a25f763ed204f0a8634421b9919f3c1ff6d95240d80e1ae6"} Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.574082 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.589048 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" podStartSLOduration=2.757560401 podStartE2EDuration="42.58903145s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.219224521 +0000 UTC m=+1044.847067571" 
lastFinishedPulling="2026-02-19 21:17:34.05069557 +0000 UTC m=+1084.678538620" observedRunningTime="2026-02-19 21:17:34.570192344 +0000 UTC m=+1085.198035394" watchObservedRunningTime="2026-02-19 21:17:34.58903145 +0000 UTC m=+1085.216874500" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.601977 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" podStartSLOduration=3.598709654 podStartE2EDuration="42.601962178s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.047646231 +0000 UTC m=+1045.675489281" lastFinishedPulling="2026-02-19 21:17:34.050898745 +0000 UTC m=+1084.678741805" observedRunningTime="2026-02-19 21:17:34.600802328 +0000 UTC m=+1085.228645378" watchObservedRunningTime="2026-02-19 21:17:34.601962178 +0000 UTC m=+1085.229805218" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.602433 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" podStartSLOduration=3.432760925 podStartE2EDuration="42.602428729s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.023919571 +0000 UTC m=+1045.651762621" lastFinishedPulling="2026-02-19 21:17:34.193587335 +0000 UTC m=+1084.821430425" observedRunningTime="2026-02-19 21:17:34.586995189 +0000 UTC m=+1085.214838239" watchObservedRunningTime="2026-02-19 21:17:34.602428729 +0000 UTC m=+1085.230271769" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.623199 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" podStartSLOduration=2.454801741 podStartE2EDuration="42.623185994s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.011460045 +0000 UTC m=+1044.639303095" 
lastFinishedPulling="2026-02-19 21:17:34.179844248 +0000 UTC m=+1084.807687348" observedRunningTime="2026-02-19 21:17:34.620598559 +0000 UTC m=+1085.248441609" watchObservedRunningTime="2026-02-19 21:17:34.623185994 +0000 UTC m=+1085.251029044" Feb 19 21:17:34 crc kubenswrapper[4886]: I0219 21:17:34.648244 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" podStartSLOduration=35.622187717 podStartE2EDuration="42.648221958s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:17:27.164432766 +0000 UTC m=+1077.792275816" lastFinishedPulling="2026-02-19 21:17:34.190466996 +0000 UTC m=+1084.818310057" observedRunningTime="2026-02-19 21:17:34.644611327 +0000 UTC m=+1085.272454377" watchObservedRunningTime="2026-02-19 21:17:34.648221958 +0000 UTC m=+1085.276065008" Feb 19 21:17:35 crc kubenswrapper[4886]: I0219 21:17:35.527975 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" Feb 19 21:17:35 crc kubenswrapper[4886]: I0219 21:17:35.628564 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" event={"ID":"dbea0765-f1be-4f22-a192-686a73112963","Type":"ContainerStarted","Data":"fe5ddc33b66e4cf895c0165b1f918749cf05ba1a2818bcf6fe0d03d3e9d6e046"} Feb 19 21:17:35 crc kubenswrapper[4886]: I0219 21:17:35.685392 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" podStartSLOduration=36.613647662 podStartE2EDuration="43.685373458s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:17:27.164421376 +0000 UTC m=+1077.792264426" lastFinishedPulling="2026-02-19 21:17:34.236147132 +0000 UTC m=+1084.863990222" 
observedRunningTime="2026-02-19 21:17:35.673535009 +0000 UTC m=+1086.301378059" watchObservedRunningTime="2026-02-19 21:17:35.685373458 +0000 UTC m=+1086.313216508" Feb 19 21:17:36 crc kubenswrapper[4886]: I0219 21:17:36.620443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" event={"ID":"7d34c65a-18e8-4709-856b-232ceae77630","Type":"ContainerStarted","Data":"5cac61a4eadecbb94d8af04f897c727b83b2b7fa7c78f626be3c1a58546672ed"} Feb 19 21:17:36 crc kubenswrapper[4886]: I0219 21:17:36.621249 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:17:36 crc kubenswrapper[4886]: I0219 21:17:36.621515 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:17:36 crc kubenswrapper[4886]: I0219 21:17:36.648005 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" podStartSLOduration=3.508367207 podStartE2EDuration="44.647986082s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.024328021 +0000 UTC m=+1045.652171071" lastFinishedPulling="2026-02-19 21:17:36.163946896 +0000 UTC m=+1086.791789946" observedRunningTime="2026-02-19 21:17:36.642864012 +0000 UTC m=+1087.270707132" watchObservedRunningTime="2026-02-19 21:17:36.647986082 +0000 UTC m=+1087.275829132" Feb 19 21:17:38 crc kubenswrapper[4886]: I0219 21:17:38.650003 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" event={"ID":"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d","Type":"ContainerStarted","Data":"de977314e77a83f55494bedf60ffdc05cfd43102e27304c1afd7089235c04107"} Feb 19 21:17:38 crc kubenswrapper[4886]: I0219 
21:17:38.651567 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:17:38 crc kubenswrapper[4886]: I0219 21:17:38.714724 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" podStartSLOduration=3.322418921 podStartE2EDuration="46.7147068s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.230012294 +0000 UTC m=+1044.857855344" lastFinishedPulling="2026-02-19 21:17:37.622300143 +0000 UTC m=+1088.250143223" observedRunningTime="2026-02-19 21:17:38.708101353 +0000 UTC m=+1089.335944423" watchObservedRunningTime="2026-02-19 21:17:38.7147068 +0000 UTC m=+1089.342549850" Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.351407 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.659705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" event={"ID":"0ce0ddb2-feaa-4719-8674-ba83fcee0a98","Type":"ContainerStarted","Data":"0323be810b9ea3dffa0850b78ae568beddb703524b43fa02749da160a349b6b4"} Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.662309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" event={"ID":"a87f6938-30e5-4481-ba31-246084feaa8a","Type":"ContainerStarted","Data":"7b5b944de8ccf50853d6ae8eae3e4b5f4dd37a11425b43603065a050046a1886"} Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.663815 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" 
event={"ID":"5c4a962c-02bb-48ff-9444-db393b42a9b0","Type":"ContainerStarted","Data":"0783c1bd9f7d56f5b70fd9091df34b1647a1816b315badb95f6e9062a11fab7d"} Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.663960 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.666080 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" event={"ID":"67db5487-865b-4ce2-8ade-a87f6909b85d","Type":"ContainerStarted","Data":"cc053692d7e6f80017f779bdbeb539e59eaf70b0bfab47992d879c29d71361c1"} Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.666421 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.686735 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" podStartSLOduration=3.3897632460000002 podStartE2EDuration="47.686713562s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.772746766 +0000 UTC m=+1045.400589806" lastFinishedPulling="2026-02-19 21:17:39.069697052 +0000 UTC m=+1089.697540122" observedRunningTime="2026-02-19 21:17:39.681400598 +0000 UTC m=+1090.309243658" watchObservedRunningTime="2026-02-19 21:17:39.686713562 +0000 UTC m=+1090.314556612" Feb 19 21:17:39 crc kubenswrapper[4886]: I0219 21:17:39.705778 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" podStartSLOduration=4.079039784 podStartE2EDuration="47.705753963s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.065284247 +0000 UTC 
m=+1045.693127297" lastFinishedPulling="2026-02-19 21:17:38.691998386 +0000 UTC m=+1089.319841476" observedRunningTime="2026-02-19 21:17:39.698670754 +0000 UTC m=+1090.326513804" watchObservedRunningTime="2026-02-19 21:17:39.705753963 +0000 UTC m=+1090.333597013" Feb 19 21:17:40 crc kubenswrapper[4886]: I0219 21:17:40.693801 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kpd49" podStartSLOduration=3.975638369 podStartE2EDuration="47.693779151s" podCreationTimestamp="2026-02-19 21:16:53 +0000 UTC" firstStartedPulling="2026-02-19 21:16:55.34880519 +0000 UTC m=+1045.976648240" lastFinishedPulling="2026-02-19 21:17:39.066945962 +0000 UTC m=+1089.694789022" observedRunningTime="2026-02-19 21:17:40.691407951 +0000 UTC m=+1091.319251041" watchObservedRunningTime="2026-02-19 21:17:40.693779151 +0000 UTC m=+1091.321622241" Feb 19 21:17:40 crc kubenswrapper[4886]: I0219 21:17:40.722947 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" podStartSLOduration=3.090617357 podStartE2EDuration="48.722932159s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:53.435595985 +0000 UTC m=+1044.063439035" lastFinishedPulling="2026-02-19 21:17:39.067910767 +0000 UTC m=+1089.695753837" observedRunningTime="2026-02-19 21:17:40.715307886 +0000 UTC m=+1091.343150936" watchObservedRunningTime="2026-02-19 21:17:40.722932159 +0000 UTC m=+1091.350775209" Feb 19 21:17:42 crc kubenswrapper[4886]: I0219 21:17:42.698786 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" event={"ID":"14b2ecba-fa5a-41f5-90d5-5085e30e277e","Type":"ContainerStarted","Data":"d0091cdd434d421156d7820510ff22edd189baccf1a731634adf833f3c2abed0"} Feb 19 21:17:42 crc kubenswrapper[4886]: I0219 21:17:42.718639 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:17:42 crc kubenswrapper[4886]: I0219 21:17:42.805808 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" Feb 19 21:17:42 crc kubenswrapper[4886]: I0219 21:17:42.974871 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.096645 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.361417 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.439332 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.544706 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.676040 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.709207 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:17:43 crc kubenswrapper[4886]: I0219 21:17:43.744570 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" podStartSLOduration=4.182412099 podStartE2EDuration="51.744549656s" podCreationTimestamp="2026-02-19 21:16:52 +0000 UTC" firstStartedPulling="2026-02-19 21:16:54.796362663 +0000 UTC m=+1045.424205713" lastFinishedPulling="2026-02-19 21:17:42.35850019 +0000 UTC m=+1092.986343270" observedRunningTime="2026-02-19 21:17:43.735665122 +0000 UTC m=+1094.363508232" watchObservedRunningTime="2026-02-19 21:17:43.744549656 +0000 UTC m=+1094.372392716" Feb 19 21:17:48 crc kubenswrapper[4886]: I0219 21:17:48.325103 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:17:48 crc kubenswrapper[4886]: I0219 21:17:48.325646 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:17:48 crc kubenswrapper[4886]: I0219 21:17:48.637471 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" Feb 19 21:17:52 crc kubenswrapper[4886]: I0219 21:17:52.720892 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" Feb 19 21:17:53 crc kubenswrapper[4886]: I0219 21:17:53.136111 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" Feb 19 21:17:53 crc kubenswrapper[4886]: I0219 21:17:53.218600 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.626192 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2kztz"] Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.629182 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.638863 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.638899 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.638953 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-z92dz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.640647 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.645841 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2kztz"] Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.682174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jchj\" (UniqueName: \"kubernetes.io/projected/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-kube-api-access-9jchj\") pod \"dnsmasq-dns-675f4bcbfc-2kztz\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.682221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-config\") pod 
\"dnsmasq-dns-675f4bcbfc-2kztz\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.684456 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7t6rp"] Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.696625 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.701318 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.701986 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7t6rp"] Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.784369 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb89s\" (UniqueName: \"kubernetes.io/projected/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-kube-api-access-gb89s\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.784489 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jchj\" (UniqueName: \"kubernetes.io/projected/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-kube-api-access-9jchj\") pod \"dnsmasq-dns-675f4bcbfc-2kztz\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.784518 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-config\") pod \"dnsmasq-dns-675f4bcbfc-2kztz\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 
21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.784574 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-config\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.784601 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.785373 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-config\") pod \"dnsmasq-dns-675f4bcbfc-2kztz\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.808622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jchj\" (UniqueName: \"kubernetes.io/projected/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-kube-api-access-9jchj\") pod \"dnsmasq-dns-675f4bcbfc-2kztz\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.886744 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-config\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.886838 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.887017 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb89s\" (UniqueName: \"kubernetes.io/projected/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-kube-api-access-gb89s\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.888794 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-config\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.888863 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.907705 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb89s\" (UniqueName: \"kubernetes.io/projected/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-kube-api-access-gb89s\") pod \"dnsmasq-dns-78dd6ddcc-7t6rp\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:13 crc kubenswrapper[4886]: I0219 21:18:13.962255 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:14 crc kubenswrapper[4886]: I0219 21:18:14.021398 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:14 crc kubenswrapper[4886]: I0219 21:18:14.434760 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2kztz"] Feb 19 21:18:14 crc kubenswrapper[4886]: I0219 21:18:14.599702 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7t6rp"] Feb 19 21:18:14 crc kubenswrapper[4886]: W0219 21:18:14.604894 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68233b32_e4f1_46e6_ad6e_e6c271e2e02f.slice/crio-01ffed43176aec27d87e63acee20c7826d61cc0518984547c216f48a35686d1a WatchSource:0}: Error finding container 01ffed43176aec27d87e63acee20c7826d61cc0518984547c216f48a35686d1a: Status 404 returned error can't find the container with id 01ffed43176aec27d87e63acee20c7826d61cc0518984547c216f48a35686d1a Feb 19 21:18:15 crc kubenswrapper[4886]: I0219 21:18:15.038734 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" event={"ID":"6862d7c6-a605-46f1-a2dc-9cef73d11b4c","Type":"ContainerStarted","Data":"4c9b00f99c55aca381af9ae43c9b0fb2ba2f714cdbe2ba2f141e5e1769ac2766"} Feb 19 21:18:15 crc kubenswrapper[4886]: I0219 21:18:15.040844 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" event={"ID":"68233b32-e4f1-46e6-ad6e-e6c271e2e02f","Type":"ContainerStarted","Data":"01ffed43176aec27d87e63acee20c7826d61cc0518984547c216f48a35686d1a"} Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.418043 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2kztz"] Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.452794 4886 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-j88cw"] Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.454294 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.482933 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-j88cw"] Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.557154 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxgm\" (UniqueName: \"kubernetes.io/projected/8902da83-187f-4cc3-8486-3d7ed6625bdb-kube-api-access-qnxgm\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.557238 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-config\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.557283 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.659867 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-config\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " 
pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.659943 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.660078 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxgm\" (UniqueName: \"kubernetes.io/projected/8902da83-187f-4cc3-8486-3d7ed6625bdb-kube-api-access-qnxgm\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.660987 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.661064 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-config\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.707498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxgm\" (UniqueName: \"kubernetes.io/projected/8902da83-187f-4cc3-8486-3d7ed6625bdb-kube-api-access-qnxgm\") pod \"dnsmasq-dns-666b6646f7-j88cw\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 
21:18:16.767230 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7t6rp"] Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.779155 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.783713 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qlf2w"] Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.785231 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.806885 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qlf2w"] Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.864515 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-config\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.865019 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.865054 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljznl\" (UniqueName: \"kubernetes.io/projected/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-kube-api-access-ljznl\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.970751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-config\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.970899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.970928 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljznl\" (UniqueName: \"kubernetes.io/projected/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-kube-api-access-ljznl\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.974504 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-config\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:16 crc kubenswrapper[4886]: I0219 21:18:16.975843 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.011401 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljznl\" (UniqueName: \"kubernetes.io/projected/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-kube-api-access-ljznl\") pod \"dnsmasq-dns-57d769cc4f-qlf2w\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.193944 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.446541 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-j88cw"] Feb 19 21:18:17 crc kubenswrapper[4886]: W0219 21:18:17.450904 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8902da83_187f_4cc3_8486_3d7ed6625bdb.slice/crio-e89819363709fab0722953ee2ce0e209e45d78d9afe23a63b2375b54167db79f WatchSource:0}: Error finding container e89819363709fab0722953ee2ce0e209e45d78d9afe23a63b2375b54167db79f: Status 404 returned error can't find the container with id e89819363709fab0722953ee2ce0e209e45d78d9afe23a63b2375b54167db79f Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.575433 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.578891 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.583050 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.583368 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.583629 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.583821 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.584779 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.585159 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j7kkn" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.585855 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.587869 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.656117 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.658125 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.670990 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.673300 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.691502 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704194 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704220 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704248 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftz2\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-kube-api-access-hftz2\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704331 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704360 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-config-data\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704454 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1ec4082-af5d-46ce-a7ca-88091e668a22-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704587 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1ec4082-af5d-46ce-a7ca-88091e668a22-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.704694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.713640 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.739940 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qlf2w"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806369 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b78f5c0-b665-4723-bddd-e6cccd0fca87-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806436 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " 
pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806508 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806567 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806623 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hftz2\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-kube-api-access-hftz2\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806662 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 
21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806687 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.806997 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807056 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/638a08ec-2f97-4b36-919f-9346af224a16-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807357 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-config-data\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807441 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-config-data\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807478 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1ec4082-af5d-46ce-a7ca-88091e668a22-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807502 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807530 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/638a08ec-2f97-4b36-919f-9346af224a16-pod-info\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807562 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-config-data\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807588 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807623 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807664 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807719 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b78f5c0-b665-4723-bddd-e6cccd0fca87-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807742 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807789 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-server-conf\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807815 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f1ec4082-af5d-46ce-a7ca-88091e668a22-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807855 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.807975 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808006 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808057 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808100 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808142 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmz5g\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-kube-api-access-vmz5g\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-config-data\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808200 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808248 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrh5p\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-kube-api-access-wrh5p\") pod 
\"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808298 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.808701 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.809533 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.809821 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.812890 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1ec4082-af5d-46ce-a7ca-88091e668a22-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.812961 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1ec4082-af5d-46ce-a7ca-88091e668a22-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.813550 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.814411 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.814447 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ba39f9798bee59ac311ac153ee53686d284935ab086ba701684d2f8fa8f640f1/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.814696 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.835652 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hftz2\" (UniqueName: 
\"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-kube-api-access-hftz2\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.875364 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917300 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmz5g\" (UniqueName: 
\"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-kube-api-access-vmz5g\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917328 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917351 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrh5p\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-kube-api-access-wrh5p\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917428 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b78f5c0-b665-4723-bddd-e6cccd0fca87-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917460 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " 
pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917537 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917611 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/638a08ec-2f97-4b36-919f-9346af224a16-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917632 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-config-data\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917660 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917686 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/638a08ec-2f97-4b36-919f-9346af224a16-pod-info\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917709 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-config-data\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917776 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917804 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9b78f5c0-b665-4723-bddd-e6cccd0fca87-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917827 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.917851 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-server-conf\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.919557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.919798 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.920351 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-server-conf\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " 
pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.920976 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.921756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.921825 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.921836 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.922781 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.923161 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.923525 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-config-data\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.924005 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-config-data\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.926667 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b78f5c0-b665-4723-bddd-e6cccd0fca87-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.927308 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.927731 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b78f5c0-b665-4723-bddd-e6cccd0fca87-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.929116 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.930084 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/638a08ec-2f97-4b36-919f-9346af224a16-pod-info\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.931720 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.931753 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f729d16b313da05fd19265a2c348290c58a3098859ab4c087c185b1545dd7ea/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.932249 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
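The `MountVolume.SetUp succeeded` entries above all share one klog shape: an escaped volume name, a `UniqueName`, and a trailing `pod="namespace/name"` field. A minimal sketch of pulling the volume and pod out of such a line, assuming the exact quoting shown in this journal (the `LINE` sample below is a reconstruction for illustration, not a log line copied verbatim):

```python
import re

# Hypothetical sample, shaped like the kubenswrapper MountVolume entries above.
LINE = (
    'Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.920976 4886 '
    'operation_generator.go:637] "MountVolume.SetUp succeeded for volume '
    '\\"rabbitmq-plugins\\" (UniqueName: \\"kubernetes.io/empty-dir/'
    '9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins\\") '
    'pod \\"rabbitmq-server-2\\" " pod="openstack/rabbitmq-server-2"'
)

# Volume name sits between escaped quotes; the pod field is plain key="value".
MOUNT_RE = re.compile(
    r'"MountVolume\.SetUp succeeded for volume \\"(?P<volume>[^\\"]+)\\".*'
    r'pod="(?P<pod>[^"]+)"'
)

def parse_mount_event(line: str):
    """Return (volume, pod) for a MountVolume.SetUp success entry, else None."""
    m = MOUNT_RE.search(line)
    return (m.group("volume"), m.group("pod")) if m else None

print(parse_mount_event(LINE))  # ('rabbitmq-plugins', 'openstack/rabbitmq-server-2')
```

Feeding every line of this transcript through `parse_mount_event` and grouping by pod would show, for example, that `rabbitmq-server-1` and `rabbitmq-server-2` each mount the same set of volumes (`plugins-conf`, `server-conf`, `config-data`, `erlang-cookie-secret`, `rabbitmq-tls`, `rabbitmq-confd`, `pod-info`, and a hostpath-provisioner PVC).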
Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.932293 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7e5ae1e34e02febc8536f339c4f3cd95210d92fb100b4ae6e8cc0017ada2a80f/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.935914 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/638a08ec-2f97-4b36-919f-9346af224a16-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.937471 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.946178 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmz5g\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-kube-api-access-vmz5g\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.954514 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") 
" pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.955043 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrh5p\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-kube-api-access-wrh5p\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.986852 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.989874 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.996990 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.996996 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.997049 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.997120 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.997180 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.997316 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bfnxg" Feb 19 21:18:17 crc kubenswrapper[4886]: I0219 21:18:17.997482 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 21:18:18 crc 
kubenswrapper[4886]: I0219 21:18:18.004173 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.028654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " pod="openstack/rabbitmq-server-2" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.062615 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " pod="openstack/rabbitmq-server-1" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.081971 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" event={"ID":"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81","Type":"ContainerStarted","Data":"fe287496c8a5ae5d5dc281554882b98073621e72b6ae1b29a5ab239f3b9bf857"} Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.083596 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" event={"ID":"8902da83-187f-4cc3-8486-3d7ed6625bdb","Type":"ContainerStarted","Data":"e89819363709fab0722953ee2ce0e209e45d78d9afe23a63b2375b54167db79f"} Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.128892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.128956 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129059 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129131 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bm9b\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-kube-api-access-2bm9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4270056-5929-46be-bced-090af7fb6761-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129181 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4270056-5929-46be-bced-090af7fb6761-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129281 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129339 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129393 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129416 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.129554 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231290 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231338 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231417 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bm9b\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-kube-api-access-2bm9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231437 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4270056-5929-46be-bced-090af7fb6761-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231453 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4270056-5929-46be-bced-090af7fb6761-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231475 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231498 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.231533 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc 
kubenswrapper[4886]: I0219 21:18:18.231575 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.232773 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.233620 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.233636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.233972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.234466 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.236018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.236175 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.236212 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e23f42a41100c932cf91673bcb2dafbf5a7af374281addd12c20957e79bb03b2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.236447 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4270056-5929-46be-bced-090af7fb6761-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.238702 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4270056-5929-46be-bced-090af7fb6761-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.238880 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.252586 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bm9b\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-kube-api-access-2bm9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.267879 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.295309 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.325604 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.325661 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.325712 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.326980 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07418cd70dea73874048c57bcddf9f82d5a0a608008d842b73583b9e639a54ec"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.327071 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://07418cd70dea73874048c57bcddf9f82d5a0a608008d842b73583b9e639a54ec" gracePeriod=600 Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.338279 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:18:18 crc kubenswrapper[4886]: I0219 21:18:18.351211 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.002798 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.005318 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.007858 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bfpzm" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.008053 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.009709 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.009855 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.018573 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.022580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.099746 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="07418cd70dea73874048c57bcddf9f82d5a0a608008d842b73583b9e639a54ec" exitCode=0 Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.099785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"07418cd70dea73874048c57bcddf9f82d5a0a608008d842b73583b9e639a54ec"} Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.099814 4886 scope.go:117] "RemoveContainer" containerID="8df8acb7039f9357ec20617d0239697fac24843c97f7ed406d68afe9849d624e" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146671 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-config-data-default\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146733 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146758 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwpm2\" (UniqueName: \"kubernetes.io/projected/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-kube-api-access-kwpm2\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-kolla-config\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc 
kubenswrapper[4886]: I0219 21:18:19.146806 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146832 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146875 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.146903 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250236 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-config-data-default\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0" Feb 19 21:18:19 crc 
kubenswrapper[4886]: I0219 21:18:19.250353 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250408 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwpm2\" (UniqueName: \"kubernetes.io/projected/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-kube-api-access-kwpm2\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250442 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-kolla-config\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250678 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.250747 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.252371 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-config-data-default\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.252740 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.253962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-kolla-config\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.256775 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.263326 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.276736 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.281174 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.282001 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/243d0947ff2f007cc82fd89e71103efce42a895ed5bea9f86e5734aacd6caf6c/globalmount\"" pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.300935 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwpm2\" (UniqueName: \"kubernetes.io/projected/a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3-kube-api-access-kwpm2\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.352858 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d60dc184-0451-4d3f-9b8b-3d6124456368\") pod \"openstack-galera-0\" (UID: \"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3\") " pod="openstack/openstack-galera-0"
Feb 19 21:18:19 crc kubenswrapper[4886]: I0219 21:18:19.634755 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.436735 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.438219 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.443244 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.443446 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vmsv5"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.443566 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.444879 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.450907 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.552882 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.554387 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.557340 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.557631 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vxs6d"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.561682 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.566437 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.592702 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t249h\" (UniqueName: \"kubernetes.io/projected/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-kube-api-access-t249h\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.592763 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.592814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.592856 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.592944 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.593075 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.593143 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.593172 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695553 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695666 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695693 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695722 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t249h\" (UniqueName: \"kubernetes.io/projected/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-kube-api-access-t249h\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695831 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695905 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stps7\" (UniqueName: \"kubernetes.io/projected/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-kube-api-access-stps7\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.695997 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.696066 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-kolla-config\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.696141 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-config-data\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.696172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.696198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.696227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.696273 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.697919 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.698421 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.699341 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.699770 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.703215 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.704828 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.704858 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/144e4c3a4e5dc053d2d0312c5eae2fb948e89bdcbe051346378b743712edf3d6/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.705401 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.732962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t249h\" (UniqueName: \"kubernetes.io/projected/aeb6523b-7fed-4c9a-87c2-b531f22c9a1c-kube-api-access-t249h\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.759118 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cc948a-cf5a-4541-a5c9-a260f77025cf\") pod \"openstack-cell1-galera-0\" (UID: \"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.799272 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stps7\" (UniqueName: \"kubernetes.io/projected/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-kube-api-access-stps7\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.799337 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-kolla-config\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.799364 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-config-data\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.799398 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.799454 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.800133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-kolla-config\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.800340 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-config-data\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.803337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.805836 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.812238 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.817867 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stps7\" (UniqueName: \"kubernetes.io/projected/0101f511-61b8-4c7c-ac46-5b0eaf73e3fe-kube-api-access-stps7\") pod \"memcached-0\" (UID: \"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe\") " pod="openstack/memcached-0"
Feb 19 21:18:20 crc kubenswrapper[4886]: I0219 21:18:20.871418 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.285548 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.287124 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.289474 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wfm67"
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.303764 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.350588 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2r9n\" (UniqueName: \"kubernetes.io/projected/517c324e-c2e0-4775-83fd-c9b811305eb0-kube-api-access-m2r9n\") pod \"kube-state-metrics-0\" (UID: \"517c324e-c2e0-4775-83fd-c9b811305eb0\") " pod="openstack/kube-state-metrics-0"
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.452455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2r9n\" (UniqueName: \"kubernetes.io/projected/517c324e-c2e0-4775-83fd-c9b811305eb0-kube-api-access-m2r9n\") pod \"kube-state-metrics-0\" (UID: \"517c324e-c2e0-4775-83fd-c9b811305eb0\") " pod="openstack/kube-state-metrics-0"
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.518443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2r9n\" (UniqueName: \"kubernetes.io/projected/517c324e-c2e0-4775-83fd-c9b811305eb0-kube-api-access-m2r9n\") pod \"kube-state-metrics-0\" (UID: \"517c324e-c2e0-4775-83fd-c9b811305eb0\") " pod="openstack/kube-state-metrics-0"
Feb 19 21:18:23 crc kubenswrapper[4886]: I0219 21:18:23.603804 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.107154 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"]
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.108686 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.114735 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.114951 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-58pvr"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.117360 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"]
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.164677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.164777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqft6\" (UniqueName: \"kubernetes.io/projected/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-kube-api-access-zqft6\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.266706 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqft6\" (UniqueName: \"kubernetes.io/projected/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-kube-api-access-zqft6\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.266842 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"
Feb 19 21:18:24 crc kubenswrapper[4886]: E0219 21:18:24.266996 4886 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Feb 19 21:18:24 crc kubenswrapper[4886]: E0219 21:18:24.267082 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-serving-cert podName:20eb82ad-3fa3-48bb-bddd-f479d3f0ac68 nodeName:}" failed. No retries permitted until 2026-02-19 21:18:24.767058297 +0000 UTC m=+1135.394901347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-serving-cert") pod "observability-ui-dashboards-66cbf594b5-7877m" (UID: "20eb82ad-3fa3-48bb-bddd-f479d3f0ac68") : secret "observability-ui-dashboards" not found
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.322233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqft6\" (UniqueName: \"kubernetes.io/projected/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-kube-api-access-zqft6\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.531670 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6dd97696d9-fc9t4"]
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.533152 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.571418 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6dd97696d9-fc9t4"]
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-trusted-ca-bundle\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572090 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-service-ca\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572126 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-console-config\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572521 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-oauth-serving-cert\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572602 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19543fcd-426f-4e08-91d1-02e568aa31d8-console-oauth-config\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572627 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19543fcd-426f-4e08-91d1-02e568aa31d8-console-serving-cert\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.572697 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbx2n\" (UniqueName: \"kubernetes.io/projected/19543fcd-426f-4e08-91d1-02e568aa31d8-kube-api-access-zbx2n\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.631834 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.634115 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.643298 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.643434 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.643535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.643639 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.644287 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.644415 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.646682 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jvtfj"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.650028 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.656637 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.675298 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.675438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-oauth-serving-cert\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.676368 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-oauth-serving-cert\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.676711 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcprz\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-kube-api-access-qcprz\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.676871 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.676974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19543fcd-426f-4e08-91d1-02e568aa31d8-console-oauth-config\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.677040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19543fcd-426f-4e08-91d1-02e568aa31d8-console-serving-cert\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.677741 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.677838 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.677877 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbx2n\" (UniqueName: \"kubernetes.io/projected/19543fcd-426f-4e08-91d1-02e568aa31d8-kube-api-access-zbx2n\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4"
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.677957 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.677984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.678083 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.678127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-trusted-ca-bundle\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.678190 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-service-ca\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc 
kubenswrapper[4886]: I0219 21:18:24.678232 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.678283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-console-config\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.678358 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.679299 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-console-config\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.679808 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-service-ca\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.680438 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19543fcd-426f-4e08-91d1-02e568aa31d8-trusted-ca-bundle\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.682425 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19543fcd-426f-4e08-91d1-02e568aa31d8-console-oauth-config\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.696489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19543fcd-426f-4e08-91d1-02e568aa31d8-console-serving-cert\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.710356 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbx2n\" (UniqueName: \"kubernetes.io/projected/19543fcd-426f-4e08-91d1-02e568aa31d8-kube-api-access-zbx2n\") pod \"console-6dd97696d9-fc9t4\" (UID: \"19543fcd-426f-4e08-91d1-02e568aa31d8\") " pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.796619 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.796682 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.796734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.796826 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.796854 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.796943 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.797006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.797076 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.797134 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.797199 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcprz\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-kube-api-access-qcprz\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.797253 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 
21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.802338 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.804181 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.805013 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.805406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.808302 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.808475 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4b24d278a8e1561de69dc574b3cb2f98927434e79938fd0747c818d910fbafa9/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.809146 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.809450 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.817595 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.824331 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.834014 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20eb82ad-3fa3-48bb-bddd-f479d3f0ac68-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-7877m\" (UID: \"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.834760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcprz\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-kube-api-access-qcprz\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.853073 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.867737 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:24 crc kubenswrapper[4886]: I0219 21:18:24.969378 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.034564 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.855529 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dz5ch"] Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.857035 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.860728 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.861301 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.861961 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pmfz4" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.867419 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4zsb6"] Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.870083 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.881716 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dz5ch"] Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.891758 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4zsb6"] Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.934624 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a5960a-9113-480d-ace4-669cf0c35e34-scripts\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.934913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-run-ovn\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.935015 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/270e230a-d5f1-40ff-968c-cd3a77504bf8-ovn-controller-tls-certs\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.935158 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270e230a-d5f1-40ff-968c-cd3a77504bf8-combined-ca-bundle\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc 
kubenswrapper[4886]: I0219 21:18:25.935304 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-run\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.935397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-run\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.935477 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-log-ovn\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.936137 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-etc-ovs\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.936760 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsck\" (UniqueName: \"kubernetes.io/projected/33a5960a-9113-480d-ace4-669cf0c35e34-kube-api-access-dgsck\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.936864 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-lib\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.937477 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-log\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.937620 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270e230a-d5f1-40ff-968c-cd3a77504bf8-scripts\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:25 crc kubenswrapper[4886]: I0219 21:18:25.937771 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bgnw\" (UniqueName: \"kubernetes.io/projected/270e230a-d5f1-40ff-968c-cd3a77504bf8-kube-api-access-6bgnw\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039408 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-etc-ovs\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039463 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dgsck\" (UniqueName: \"kubernetes.io/projected/33a5960a-9113-480d-ace4-669cf0c35e34-kube-api-access-dgsck\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039491 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-lib\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039557 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-log\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039592 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270e230a-d5f1-40ff-968c-cd3a77504bf8-scripts\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bgnw\" (UniqueName: \"kubernetes.io/projected/270e230a-d5f1-40ff-968c-cd3a77504bf8-kube-api-access-6bgnw\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039678 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a5960a-9113-480d-ace4-669cf0c35e34-scripts\") pod 
\"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039699 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-run-ovn\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039729 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/270e230a-d5f1-40ff-968c-cd3a77504bf8-ovn-controller-tls-certs\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039789 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270e230a-d5f1-40ff-968c-cd3a77504bf8-combined-ca-bundle\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-run\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.039824 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-run\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 
crc kubenswrapper[4886]: I0219 21:18:26.039843 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-log-ovn\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.040414 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-etc-ovs\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.040421 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-log-ovn\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.040664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-lib\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.040741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-log\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.040942 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-run-ovn\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.041091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a5960a-9113-480d-ace4-669cf0c35e34-var-run\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.042010 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/270e230a-d5f1-40ff-968c-cd3a77504bf8-var-run\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.042534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270e230a-d5f1-40ff-968c-cd3a77504bf8-scripts\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.042727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a5960a-9113-480d-ace4-669cf0c35e34-scripts\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.045615 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/270e230a-d5f1-40ff-968c-cd3a77504bf8-ovn-controller-tls-certs\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 
19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.045698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270e230a-d5f1-40ff-968c-cd3a77504bf8-combined-ca-bundle\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.060045 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bgnw\" (UniqueName: \"kubernetes.io/projected/270e230a-d5f1-40ff-968c-cd3a77504bf8-kube-api-access-6bgnw\") pod \"ovn-controller-dz5ch\" (UID: \"270e230a-d5f1-40ff-968c-cd3a77504bf8\") " pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.063751 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsck\" (UniqueName: \"kubernetes.io/projected/33a5960a-9113-480d-ace4-669cf0c35e34-kube-api-access-dgsck\") pod \"ovn-controller-ovs-4zsb6\" (UID: \"33a5960a-9113-480d-ace4-669cf0c35e34\") " pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.180494 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.189057 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.507351 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.762465 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.767236 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.770610 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.773028 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.773171 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.773284 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.773375 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-scjwm" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.801060 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f51e97c2-10ea-498b-8239-f30033c0069a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856583 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65j4g\" (UniqueName: \"kubernetes.io/projected/f51e97c2-10ea-498b-8239-f30033c0069a-kube-api-access-65j4g\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856646 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e97c2-10ea-498b-8239-f30033c0069a-config\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856716 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f51e97c2-10ea-498b-8239-f30033c0069a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.856750 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.857235 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.958922 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959017 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e97c2-10ea-498b-8239-f30033c0069a-config\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959045 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f51e97c2-10ea-498b-8239-f30033c0069a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959081 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959186 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f51e97c2-10ea-498b-8239-f30033c0069a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65j4g\" (UniqueName: \"kubernetes.io/projected/f51e97c2-10ea-498b-8239-f30033c0069a-kube-api-access-65j4g\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.959256 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.960455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e97c2-10ea-498b-8239-f30033c0069a-config\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.960522 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f51e97c2-10ea-498b-8239-f30033c0069a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.960579 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f51e97c2-10ea-498b-8239-f30033c0069a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.964346 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.964931 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.965078 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ebc183d47c3bba62cfd388c896467d7314bf083885437395ccea2461651a55ce/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.966249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.968093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f51e97c2-10ea-498b-8239-f30033c0069a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:26 crc kubenswrapper[4886]: I0219 21:18:26.981498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65j4g\" (UniqueName: \"kubernetes.io/projected/f51e97c2-10ea-498b-8239-f30033c0069a-kube-api-access-65j4g\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:27 crc kubenswrapper[4886]: I0219 21:18:27.002712 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ab42d9f-f595-48c1-af96-547d42a04a03\") pod \"ovsdbserver-nb-0\" (UID: \"f51e97c2-10ea-498b-8239-f30033c0069a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:27 crc kubenswrapper[4886]: I0219 21:18:27.091423 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.191387 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.193559 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.198231 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.198493 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.201452 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.205565 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.212126 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6g8jv" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.348988 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349090 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6scdb\" (UniqueName: \"kubernetes.io/projected/16796d4f-3225-41e3-be54-1058492aa1ee-kube-api-access-6scdb\") 
pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349137 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349229 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349308 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16796d4f-3225-41e3-be54-1058492aa1ee-config\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16796d4f-3225-41e3-be54-1058492aa1ee-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.349480 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16796d4f-3225-41e3-be54-1058492aa1ee-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451315 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6scdb\" (UniqueName: \"kubernetes.io/projected/16796d4f-3225-41e3-be54-1058492aa1ee-kube-api-access-6scdb\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451459 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451513 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16796d4f-3225-41e3-be54-1058492aa1ee-config\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc 
kubenswrapper[4886]: I0219 21:18:30.451571 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16796d4f-3225-41e3-be54-1058492aa1ee-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451670 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16796d4f-3225-41e3-be54-1058492aa1ee-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.451716 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.452847 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16796d4f-3225-41e3-be54-1058492aa1ee-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.453606 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16796d4f-3225-41e3-be54-1058492aa1ee-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.453613 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/16796d4f-3225-41e3-be54-1058492aa1ee-config\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.459070 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.459119 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9115b867c5fd5564d53f69a973e2f2a2536eb5c1dc635f13e9d90e7bd62db1b7/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.459488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.460015 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.468540 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/16796d4f-3225-41e3-be54-1058492aa1ee-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.485611 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6scdb\" (UniqueName: \"kubernetes.io/projected/16796d4f-3225-41e3-be54-1058492aa1ee-kube-api-access-6scdb\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.523870 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceaece87-bdf7-4bd6-bf45-c740c00983b3\") pod \"ovsdbserver-sb-0\" (UID: \"16796d4f-3225-41e3-be54-1058492aa1ee\") " pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.533627 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6g8jv" Feb 19 21:18:30 crc kubenswrapper[4886]: I0219 21:18:30.542704 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:32 crc kubenswrapper[4886]: W0219 21:18:32.890361 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80ab4bf_a8a5_46c1_8d1b_08e3bb253ae3.slice/crio-c9620a53e19edce350e8f1671679ecb63ae87f636a987fc0f7a053554f94c592 WatchSource:0}: Error finding container c9620a53e19edce350e8f1671679ecb63ae87f636a987fc0f7a053554f94c592: Status 404 returned error can't find the container with id c9620a53e19edce350e8f1671679ecb63ae87f636a987fc0f7a053554f94c592 Feb 19 21:18:33 crc kubenswrapper[4886]: I0219 21:18:33.246073 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:18:33 crc kubenswrapper[4886]: I0219 21:18:33.250417 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3","Type":"ContainerStarted","Data":"c9620a53e19edce350e8f1671679ecb63ae87f636a987fc0f7a053554f94c592"} Feb 19 21:18:33 crc kubenswrapper[4886]: I0219 21:18:33.350252 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:18:34 crc kubenswrapper[4886]: W0219 21:18:34.973365 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638a08ec_2f97_4b36_919f_9346af224a16.slice/crio-d0953245636e3c75001058e15daa0499c12c0f5cae74c535aa6b04a1c709c1cf WatchSource:0}: Error finding container d0953245636e3c75001058e15daa0499c12c0f5cae74c535aa6b04a1c709c1cf: Status 404 returned error can't find the container with id d0953245636e3c75001058e15daa0499c12c0f5cae74c535aa6b04a1c709c1cf Feb 19 21:18:35 crc kubenswrapper[4886]: E0219 21:18:35.067940 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 21:18:35 crc kubenswrapper[4886]: E0219 21:18:35.068109 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jchj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-2kztz_openstack(6862d7c6-a605-46f1-a2dc-9cef73d11b4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:18:35 crc kubenswrapper[4886]: E0219 21:18:35.069691 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" podUID="6862d7c6-a605-46f1-a2dc-9cef73d11b4c" Feb 19 21:18:35 crc kubenswrapper[4886]: E0219 21:18:35.110525 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 21:18:35 crc kubenswrapper[4886]: E0219 21:18:35.110665 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb89s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7t6rp_openstack(68233b32-e4f1-46e6-ad6e-e6c271e2e02f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:18:35 crc kubenswrapper[4886]: E0219 21:18:35.112167 4886 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" podUID="68233b32-e4f1-46e6-ad6e-e6c271e2e02f" Feb 19 21:18:35 crc kubenswrapper[4886]: I0219 21:18:35.299524 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"638a08ec-2f97-4b36-919f-9346af224a16","Type":"ContainerStarted","Data":"d0953245636e3c75001058e15daa0499c12c0f5cae74c535aa6b04a1c709c1cf"} Feb 19 21:18:35 crc kubenswrapper[4886]: I0219 21:18:35.302256 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1ec4082-af5d-46ce-a7ca-88091e668a22","Type":"ContainerStarted","Data":"612dcfd532ca33e13d20f1d94ee06ffb69fed04dcf7cfc615b6406193a5f5c22"} Feb 19 21:18:35 crc kubenswrapper[4886]: I0219 21:18:35.467121 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:18:35 crc kubenswrapper[4886]: I0219 21:18:35.607476 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 21:18:35 crc kubenswrapper[4886]: W0219 21:18:35.628757 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeb6523b_7fed_4c9a_87c2_b531f22c9a1c.slice/crio-8a732888f09820e5f937824ff5c89be6a39bf6c413218775a88f7a96c2e4d4ca WatchSource:0}: Error finding container 8a732888f09820e5f937824ff5c89be6a39bf6c413218775a88f7a96c2e4d4ca: Status 404 returned error can't find the container with id 8a732888f09820e5f937824ff5c89be6a39bf6c413218775a88f7a96c2e4d4ca Feb 19 21:18:35 crc kubenswrapper[4886]: I0219 21:18:35.924978 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 21:18:35 crc kubenswrapper[4886]: W0219 21:18:35.980856 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0101f511_61b8_4c7c_ac46_5b0eaf73e3fe.slice/crio-2b1592c3d3589816e770550e90d5bd5eb0f3a1e0fac843b1ad3b49a036619aca WatchSource:0}: Error finding container 2b1592c3d3589816e770550e90d5bd5eb0f3a1e0fac843b1ad3b49a036619aca: Status 404 returned error can't find the container with id 2b1592c3d3589816e770550e90d5bd5eb0f3a1e0fac843b1ad3b49a036619aca Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.035882 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.192500 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-config\") pod \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.193122 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb89s\" (UniqueName: \"kubernetes.io/projected/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-kube-api-access-gb89s\") pod \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.193170 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-dns-svc\") pod \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\" (UID: \"68233b32-e4f1-46e6-ad6e-e6c271e2e02f\") " Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.194270 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-config" (OuterVolumeSpecName: "config") pod "68233b32-e4f1-46e6-ad6e-e6c271e2e02f" (UID: "68233b32-e4f1-46e6-ad6e-e6c271e2e02f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.194307 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68233b32-e4f1-46e6-ad6e-e6c271e2e02f" (UID: "68233b32-e4f1-46e6-ad6e-e6c271e2e02f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.211543 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-kube-api-access-gb89s" (OuterVolumeSpecName: "kube-api-access-gb89s") pod "68233b32-e4f1-46e6-ad6e-e6c271e2e02f" (UID: "68233b32-e4f1-46e6-ad6e-e6c271e2e02f"). InnerVolumeSpecName "kube-api-access-gb89s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.297756 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.297787 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb89s\" (UniqueName: \"kubernetes.io/projected/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-kube-api-access-gb89s\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.297800 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68233b32-e4f1-46e6-ad6e-e6c271e2e02f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.316724 4886 generic.go:334] "Generic (PLEG): container finished" podID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" 
containerID="28d34acfdd74b2a2a419d54937bf0b9febcc2e5c349f7f07a18d7691616cbb1e" exitCode=0 Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.316808 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" event={"ID":"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81","Type":"ContainerDied","Data":"28d34acfdd74b2a2a419d54937bf0b9febcc2e5c349f7f07a18d7691616cbb1e"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.323573 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.327489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9b78f5c0-b665-4723-bddd-e6cccd0fca87","Type":"ContainerStarted","Data":"6d09db555acff88a4760fe63c41ebd935387675028bf0726bec583949002e087"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.362321 4886 generic.go:334] "Generic (PLEG): container finished" podID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerID="01fd17a56e880e08461af0f922c032ebae5911d67a0cfcbea7cd7f63acecc3ec" exitCode=0 Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.362539 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" event={"ID":"8902da83-187f-4cc3-8486-3d7ed6625bdb","Type":"ContainerDied","Data":"01fd17a56e880e08461af0f922c032ebae5911d67a0cfcbea7cd7f63acecc3ec"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.380515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe","Type":"ContainerStarted","Data":"2b1592c3d3589816e770550e90d5bd5eb0f3a1e0fac843b1ad3b49a036619aca"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.393126 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" event={"ID":"68233b32-e4f1-46e6-ad6e-e6c271e2e02f","Type":"ContainerDied","Data":"01ffed43176aec27d87e63acee20c7826d61cc0518984547c216f48a35686d1a"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.393417 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7t6rp" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.411836 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c","Type":"ContainerStarted","Data":"8a732888f09820e5f937824ff5c89be6a39bf6c413218775a88f7a96c2e4d4ca"} Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.450161 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.473042 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-7877m"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.482394 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.494422 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6dd97696d9-fc9t4"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.497075 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.502669 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:18:36 crc kubenswrapper[4886]: W0219 21:18:36.513182 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod517c324e_c2e0_4775_83fd_c9b811305eb0.slice/crio-f610e118982857b1154df7ea5681eb2df4ff08b98bf30930d4749c64611fed25 WatchSource:0}: Error finding container f610e118982857b1154df7ea5681eb2df4ff08b98bf30930d4749c64611fed25: Status 404 returned error can't find the container with id f610e118982857b1154df7ea5681eb2df4ff08b98bf30930d4749c64611fed25 Feb 19 21:18:36 crc kubenswrapper[4886]: W0219 21:18:36.577043 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19543fcd_426f_4e08_91d1_02e568aa31d8.slice/crio-739b93830655c4dd119e6b39452af72e07733bd392e9db8dedbc8bcbc87bb7e6 WatchSource:0}: Error finding container 739b93830655c4dd119e6b39452af72e07733bd392e9db8dedbc8bcbc87bb7e6: Status 404 returned error can't find the container with id 739b93830655c4dd119e6b39452af72e07733bd392e9db8dedbc8bcbc87bb7e6 Feb 19 21:18:36 crc kubenswrapper[4886]: W0219 21:18:36.578614 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4270056_5929_46be_bced_090af7fb6761.slice/crio-6e95c8f6b26d0884009e67178d07c44f76d83fd2e191b561caafd75a0e18839a WatchSource:0}: Error finding container 6e95c8f6b26d0884009e67178d07c44f76d83fd2e191b561caafd75a0e18839a: Status 404 returned error can't find the container with id 6e95c8f6b26d0884009e67178d07c44f76d83fd2e191b561caafd75a0e18839a Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.604729 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9jchj\" (UniqueName: \"kubernetes.io/projected/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-kube-api-access-9jchj\") pod \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.604864 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-config\") pod \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\" (UID: \"6862d7c6-a605-46f1-a2dc-9cef73d11b4c\") " Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.605691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-config" (OuterVolumeSpecName: "config") pod "6862d7c6-a605-46f1-a2dc-9cef73d11b4c" (UID: "6862d7c6-a605-46f1-a2dc-9cef73d11b4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.610439 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-kube-api-access-9jchj" (OuterVolumeSpecName: "kube-api-access-9jchj") pod "6862d7c6-a605-46f1-a2dc-9cef73d11b4c" (UID: "6862d7c6-a605-46f1-a2dc-9cef73d11b4c"). InnerVolumeSpecName "kube-api-access-9jchj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.685166 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7t6rp"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.685201 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7t6rp"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.685325 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dz5ch"] Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.707464 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:36 crc kubenswrapper[4886]: I0219 21:18:36.707494 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jchj\" (UniqueName: \"kubernetes.io/projected/6862d7c6-a605-46f1-a2dc-9cef73d11b4c-kube-api-access-9jchj\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:36 crc kubenswrapper[4886]: E0219 21:18:36.942478 4886 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 21:18:36 crc kubenswrapper[4886]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/8902da83-187f-4cc3-8486-3d7ed6625bdb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 21:18:36 crc kubenswrapper[4886]: > podSandboxID="e89819363709fab0722953ee2ce0e209e45d78d9afe23a63b2375b54167db79f" Feb 19 21:18:36 crc kubenswrapper[4886]: E0219 21:18:36.942904 4886 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 21:18:36 crc kubenswrapper[4886]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts 
--keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnxgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-j88cw_openstack(8902da83-187f-4cc3-8486-3d7ed6625bdb): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/8902da83-187f-4cc3-8486-3d7ed6625bdb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 21:18:36 crc kubenswrapper[4886]: > logger="UnhandledError" Feb 19 21:18:36 crc kubenswrapper[4886]: E0219 21:18:36.944224 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/8902da83-187f-4cc3-8486-3d7ed6625bdb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.256109 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4zsb6"] Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.423518 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.423519 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2kztz" event={"ID":"6862d7c6-a605-46f1-a2dc-9cef73d11b4c","Type":"ContainerDied","Data":"4c9b00f99c55aca381af9ae43c9b0fb2ba2f714cdbe2ba2f141e5e1769ac2766"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.425287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerStarted","Data":"6b8f4475273da307d316594838be7feb0516f55c7a912d65a71527f93b1aad03"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.427785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dz5ch" event={"ID":"270e230a-d5f1-40ff-968c-cd3a77504bf8","Type":"ContainerStarted","Data":"1b88d03a2ae757adcaa7500fb563df052a13db1872af477b4ae6dee4c6aaca3e"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.428993 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4270056-5929-46be-bced-090af7fb6761","Type":"ContainerStarted","Data":"6e95c8f6b26d0884009e67178d07c44f76d83fd2e191b561caafd75a0e18839a"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.431287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dd97696d9-fc9t4" event={"ID":"19543fcd-426f-4e08-91d1-02e568aa31d8","Type":"ContainerStarted","Data":"39253954da1f6231c56a471fda23c5368477ccf3238b4e47fd63411a7f8be1e5"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.431329 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dd97696d9-fc9t4" event={"ID":"19543fcd-426f-4e08-91d1-02e568aa31d8","Type":"ContainerStarted","Data":"739b93830655c4dd119e6b39452af72e07733bd392e9db8dedbc8bcbc87bb7e6"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 
21:18:37.435158 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" event={"ID":"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81","Type":"ContainerStarted","Data":"f06e4623bc1db0d058a9a4b676f9523dc6f0f47b1f59227bce81fc00fa949c27"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.435283 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.436515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m" event={"ID":"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68","Type":"ContainerStarted","Data":"cbbe03180ed0534fe7af72461f2a76ab17e22babc95028b0f707782e3e234555"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.437815 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"517c324e-c2e0-4775-83fd-c9b811305eb0","Type":"ContainerStarted","Data":"f610e118982857b1154df7ea5681eb2df4ff08b98bf30930d4749c64611fed25"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.442054 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4zsb6" event={"ID":"33a5960a-9113-480d-ace4-669cf0c35e34","Type":"ContainerStarted","Data":"2746b756355dff2ed1c4cbcbfccf115c20800844993f773897b5accaeb91e936"} Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.508734 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2kztz"] Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.516529 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2kztz"] Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.526992 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6dd97696d9-fc9t4" podStartSLOduration=13.526975282 podStartE2EDuration="13.526975282s" 
podCreationTimestamp="2026-02-19 21:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:18:37.494516031 +0000 UTC m=+1148.122359101" watchObservedRunningTime="2026-02-19 21:18:37.526975282 +0000 UTC m=+1148.154818332" Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.537246 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" podStartSLOduration=3.633544137 podStartE2EDuration="21.537228558s" podCreationTimestamp="2026-02-19 21:18:16 +0000 UTC" firstStartedPulling="2026-02-19 21:18:17.69856452 +0000 UTC m=+1128.326407570" lastFinishedPulling="2026-02-19 21:18:35.602248941 +0000 UTC m=+1146.230091991" observedRunningTime="2026-02-19 21:18:37.506648645 +0000 UTC m=+1148.134491695" watchObservedRunningTime="2026-02-19 21:18:37.537228558 +0000 UTC m=+1148.165071608" Feb 19 21:18:37 crc kubenswrapper[4886]: I0219 21:18:37.879141 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 21:18:37 crc kubenswrapper[4886]: W0219 21:18:37.882158 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51e97c2_10ea_498b_8239_f30033c0069a.slice/crio-0e5147700ff0a1e7bc208d23c2902cc119c7ddcab152dbf3fc96e06212542119 WatchSource:0}: Error finding container 0e5147700ff0a1e7bc208d23c2902cc119c7ddcab152dbf3fc96e06212542119: Status 404 returned error can't find the container with id 0e5147700ff0a1e7bc208d23c2902cc119c7ddcab152dbf3fc96e06212542119 Feb 19 21:18:38 crc kubenswrapper[4886]: I0219 21:18:38.264738 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 21:18:38 crc kubenswrapper[4886]: I0219 21:18:38.473769 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"f51e97c2-10ea-498b-8239-f30033c0069a","Type":"ContainerStarted","Data":"0e5147700ff0a1e7bc208d23c2902cc119c7ddcab152dbf3fc96e06212542119"} Feb 19 21:18:38 crc kubenswrapper[4886]: I0219 21:18:38.476579 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" event={"ID":"8902da83-187f-4cc3-8486-3d7ed6625bdb","Type":"ContainerStarted","Data":"1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db"} Feb 19 21:18:38 crc kubenswrapper[4886]: I0219 21:18:38.503897 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" podStartSLOduration=4.5740046119999995 podStartE2EDuration="22.503883067s" podCreationTimestamp="2026-02-19 21:18:16 +0000 UTC" firstStartedPulling="2026-02-19 21:18:17.454169745 +0000 UTC m=+1128.082012795" lastFinishedPulling="2026-02-19 21:18:35.3840482 +0000 UTC m=+1146.011891250" observedRunningTime="2026-02-19 21:18:38.497348413 +0000 UTC m=+1149.125191463" watchObservedRunningTime="2026-02-19 21:18:38.503883067 +0000 UTC m=+1149.131726117" Feb 19 21:18:38 crc kubenswrapper[4886]: I0219 21:18:38.612462 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68233b32-e4f1-46e6-ad6e-e6c271e2e02f" path="/var/lib/kubelet/pods/68233b32-e4f1-46e6-ad6e-e6c271e2e02f/volumes" Feb 19 21:18:38 crc kubenswrapper[4886]: I0219 21:18:38.612855 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6862d7c6-a605-46f1-a2dc-9cef73d11b4c" path="/var/lib/kubelet/pods/6862d7c6-a605-46f1-a2dc-9cef73d11b4c/volumes" Feb 19 21:18:39 crc kubenswrapper[4886]: I0219 21:18:39.485804 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"16796d4f-3225-41e3-be54-1058492aa1ee","Type":"ContainerStarted","Data":"c45315fbc70189ba6ce81a359eaa887b6e8bb4d2a9eda13ae4b649b69dac51d2"} Feb 19 21:18:41 crc kubenswrapper[4886]: I0219 21:18:41.780304 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:42 crc kubenswrapper[4886]: I0219 21:18:42.196815 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:18:42 crc kubenswrapper[4886]: I0219 21:18:42.262460 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-j88cw"] Feb 19 21:18:42 crc kubenswrapper[4886]: I0219 21:18:42.522728 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="dnsmasq-dns" containerID="cri-o://1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db" gracePeriod=10 Feb 19 21:18:42 crc kubenswrapper[4886]: I0219 21:18:42.523748 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:42 crc kubenswrapper[4886]: E0219 21:18:42.748822 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8902da83_187f_4cc3_8486_3d7ed6625bdb.slice/crio-1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8902da83_187f_4cc3_8486_3d7ed6625bdb.slice/crio-conmon-1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:18:43 crc kubenswrapper[4886]: I0219 21:18:43.544326 4886 generic.go:334] "Generic (PLEG): container finished" podID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerID="1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db" exitCode=0 Feb 19 21:18:43 crc kubenswrapper[4886]: I0219 21:18:43.544382 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" event={"ID":"8902da83-187f-4cc3-8486-3d7ed6625bdb","Type":"ContainerDied","Data":"1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db"} Feb 19 21:18:44 crc kubenswrapper[4886]: I0219 21:18:44.853394 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:44 crc kubenswrapper[4886]: I0219 21:18:44.853734 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:44 crc kubenswrapper[4886]: I0219 21:18:44.859188 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:45 crc kubenswrapper[4886]: I0219 21:18:45.573491 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6dd97696d9-fc9t4" Feb 19 21:18:45 crc kubenswrapper[4886]: I0219 21:18:45.645576 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79dd89cd97-kwpj9"] Feb 19 21:18:46 crc kubenswrapper[4886]: I0219 21:18:46.909379 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:46 crc kubenswrapper[4886]: I0219 21:18:46.946620 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-dns-svc\") pod \"8902da83-187f-4cc3-8486-3d7ed6625bdb\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " Feb 19 21:18:46 crc kubenswrapper[4886]: I0219 21:18:46.946763 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnxgm\" (UniqueName: \"kubernetes.io/projected/8902da83-187f-4cc3-8486-3d7ed6625bdb-kube-api-access-qnxgm\") pod \"8902da83-187f-4cc3-8486-3d7ed6625bdb\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " Feb 19 21:18:46 crc kubenswrapper[4886]: I0219 21:18:46.946895 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-config\") pod \"8902da83-187f-4cc3-8486-3d7ed6625bdb\" (UID: \"8902da83-187f-4cc3-8486-3d7ed6625bdb\") " Feb 19 21:18:46 crc kubenswrapper[4886]: I0219 21:18:46.952336 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8902da83-187f-4cc3-8486-3d7ed6625bdb-kube-api-access-qnxgm" (OuterVolumeSpecName: "kube-api-access-qnxgm") pod "8902da83-187f-4cc3-8486-3d7ed6625bdb" (UID: "8902da83-187f-4cc3-8486-3d7ed6625bdb"). InnerVolumeSpecName "kube-api-access-qnxgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.008509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8902da83-187f-4cc3-8486-3d7ed6625bdb" (UID: "8902da83-187f-4cc3-8486-3d7ed6625bdb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.014926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-config" (OuterVolumeSpecName: "config") pod "8902da83-187f-4cc3-8486-3d7ed6625bdb" (UID: "8902da83-187f-4cc3-8486-3d7ed6625bdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.050706 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnxgm\" (UniqueName: \"kubernetes.io/projected/8902da83-187f-4cc3-8486-3d7ed6625bdb-kube-api-access-qnxgm\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.050745 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.050757 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8902da83-187f-4cc3-8486-3d7ed6625bdb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.588598 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" event={"ID":"8902da83-187f-4cc3-8486-3d7ed6625bdb","Type":"ContainerDied","Data":"e89819363709fab0722953ee2ce0e209e45d78d9afe23a63b2375b54167db79f"} Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.588935 4886 scope.go:117] "RemoveContainer" containerID="1a821e0304e6b723e34aaef22796114d14695d375f877481f51c6d748e8742db" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.588662 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.644745 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-j88cw"] Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.658391 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-j88cw"] Feb 19 21:18:47 crc kubenswrapper[4886]: I0219 21:18:47.802572 4886 scope.go:117] "RemoveContainer" containerID="01fd17a56e880e08461af0f922c032ebae5911d67a0cfcbea7cd7f63acecc3ec" Feb 19 21:18:48 crc kubenswrapper[4886]: I0219 21:18:48.617766 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" path="/var/lib/kubelet/pods/8902da83-187f-4cc3-8486-3d7ed6625bdb/volumes" Feb 19 21:18:49 crc kubenswrapper[4886]: I0219 21:18:49.607510 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4zsb6" event={"ID":"33a5960a-9113-480d-ace4-669cf0c35e34","Type":"ContainerStarted","Data":"302d844f6a4eec56de401b5168d4d8f0c04ed4dbcbbc6bd87f468197355bee12"} Feb 19 21:18:49 crc kubenswrapper[4886]: I0219 21:18:49.609328 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m" event={"ID":"20eb82ad-3fa3-48bb-bddd-f479d3f0ac68","Type":"ContainerStarted","Data":"d8d1b645f73b5211c3dd26eec1a525d79f726ab6aa60aae393fc36ab8643c277"} Feb 19 21:18:49 crc kubenswrapper[4886]: I0219 21:18:49.610924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f51e97c2-10ea-498b-8239-f30033c0069a","Type":"ContainerStarted","Data":"c4e0a3903c67b8ba9521a8a3eb7abed41cffb91f6b3bbf0a088ad020b175b509"} Feb 19 21:18:49 crc kubenswrapper[4886]: I0219 21:18:49.646879 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-7877m" 
podStartSLOduration=15.058967484 podStartE2EDuration="25.646850579s" podCreationTimestamp="2026-02-19 21:18:24 +0000 UTC" firstStartedPulling="2026-02-19 21:18:36.56571024 +0000 UTC m=+1147.193553300" lastFinishedPulling="2026-02-19 21:18:47.153593345 +0000 UTC m=+1157.781436395" observedRunningTime="2026-02-19 21:18:49.640076239 +0000 UTC m=+1160.267919299" watchObservedRunningTime="2026-02-19 21:18:49.646850579 +0000 UTC m=+1160.274693659" Feb 19 21:18:50 crc kubenswrapper[4886]: I0219 21:18:50.634971 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"16796d4f-3225-41e3-be54-1058492aa1ee","Type":"ContainerStarted","Data":"cd04a50f0f32c6d4a8ee068cc0082a2c354762fc35dac3c8f865c04e91caeb02"} Feb 19 21:18:50 crc kubenswrapper[4886]: I0219 21:18:50.637633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0101f511-61b8-4c7c-ac46-5b0eaf73e3fe","Type":"ContainerStarted","Data":"9022726c956da4a5f676a530018fb3ca1422469392e52be9196bcf0780054072"} Feb 19 21:18:50 crc kubenswrapper[4886]: I0219 21:18:50.639936 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3","Type":"ContainerStarted","Data":"615fef6ef535c1860b8584392f092a9783e67890734ac4400dadf1278c1a1d86"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.653347 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dz5ch" event={"ID":"270e230a-d5f1-40ff-968c-cd3a77504bf8","Type":"ContainerStarted","Data":"61d4b04dca19fd28d3b2a1e25aef17ce64f92acd4e83da921e2524453d52dc9f"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.653809 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dz5ch" Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.655080 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"638a08ec-2f97-4b36-919f-9346af224a16","Type":"ContainerStarted","Data":"739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.657245 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4270056-5929-46be-bced-090af7fb6761","Type":"ContainerStarted","Data":"ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.659307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9b78f5c0-b665-4723-bddd-e6cccd0fca87","Type":"ContainerStarted","Data":"aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.661599 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"517c324e-c2e0-4775-83fd-c9b811305eb0","Type":"ContainerStarted","Data":"8580b0392770d2d79bff5e52569df9d70f9e320b7657aa1028609e9a43562f9f"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.662091 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.667473 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1ec4082-af5d-46ce-a7ca-88091e668a22","Type":"ContainerStarted","Data":"169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.670325 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c","Type":"ContainerStarted","Data":"9bcc5bfcaf031b1fe92986e2c726892392d4f127265a62b4c38f7af660aaaa6a"} Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.676164 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-dz5ch" podStartSLOduration=15.941816189 podStartE2EDuration="26.676146792s" podCreationTimestamp="2026-02-19 21:18:25 +0000 UTC" firstStartedPulling="2026-02-19 21:18:36.716183499 +0000 UTC m=+1147.344026549" lastFinishedPulling="2026-02-19 21:18:47.450514072 +0000 UTC m=+1158.078357152" observedRunningTime="2026-02-19 21:18:51.675169618 +0000 UTC m=+1162.303012668" watchObservedRunningTime="2026-02-19 21:18:51.676146792 +0000 UTC m=+1162.303989832" Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.704342 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.154047619 podStartE2EDuration="28.704235724s" podCreationTimestamp="2026-02-19 21:18:23 +0000 UTC" firstStartedPulling="2026-02-19 21:18:36.565590327 +0000 UTC m=+1147.193433377" lastFinishedPulling="2026-02-19 21:18:49.115778442 +0000 UTC m=+1159.743621482" observedRunningTime="2026-02-19 21:18:51.698609753 +0000 UTC m=+1162.326452803" watchObservedRunningTime="2026-02-19 21:18:51.704235724 +0000 UTC m=+1162.332078784" Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.794134 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.622816909 podStartE2EDuration="31.794117419s" podCreationTimestamp="2026-02-19 21:18:20 +0000 UTC" firstStartedPulling="2026-02-19 21:18:35.982805268 +0000 UTC m=+1146.610648318" lastFinishedPulling="2026-02-19 21:18:47.154105778 +0000 UTC m=+1157.781948828" observedRunningTime="2026-02-19 21:18:51.782302174 +0000 UTC m=+1162.410145214" watchObservedRunningTime="2026-02-19 21:18:51.794117419 +0000 UTC m=+1162.421960469" Feb 19 21:18:51 crc kubenswrapper[4886]: I0219 21:18:51.801407 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-j88cw" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.127:5353: i/o timeout" Feb 19 21:18:52 crc kubenswrapper[4886]: I0219 21:18:52.687478 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerStarted","Data":"67c38c89f5bb22932e22d47703fc0fe71f7a8b826a0f8ee230b5ad9693ac5bcc"} Feb 19 21:18:52 crc kubenswrapper[4886]: I0219 21:18:52.690950 4886 generic.go:334] "Generic (PLEG): container finished" podID="33a5960a-9113-480d-ace4-669cf0c35e34" containerID="302d844f6a4eec56de401b5168d4d8f0c04ed4dbcbbc6bd87f468197355bee12" exitCode=0 Feb 19 21:18:52 crc kubenswrapper[4886]: I0219 21:18:52.692573 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4zsb6" event={"ID":"33a5960a-9113-480d-ace4-669cf0c35e34","Type":"ContainerDied","Data":"302d844f6a4eec56de401b5168d4d8f0c04ed4dbcbbc6bd87f468197355bee12"} Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.700570 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"16796d4f-3225-41e3-be54-1058492aa1ee","Type":"ContainerStarted","Data":"11d35cd5d76cde19ff0466cebb861011df000d2471f48be48c0bbc7552e0e8f9"} Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.703570 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4zsb6" event={"ID":"33a5960a-9113-480d-ace4-669cf0c35e34","Type":"ContainerStarted","Data":"1f21ef20d3793f824bb8fbb547949057721b95acd9e04887d08e0ab9de9761dc"} Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.703602 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4zsb6" event={"ID":"33a5960a-9113-480d-ace4-669cf0c35e34","Type":"ContainerStarted","Data":"88e1f4dc66ba85baba1a176050f737fbe2619f4d6029b6dda0dd478e5ff22675"} Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.703808 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.703852 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.708642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f51e97c2-10ea-498b-8239-f30033c0069a","Type":"ContainerStarted","Data":"03bd4bbe56bb9c2b798a952443d547d042ca60c33db1df9dad574da57b7f5c18"} Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.732077 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.249260978 podStartE2EDuration="24.732059831s" podCreationTimestamp="2026-02-19 21:18:29 +0000 UTC" firstStartedPulling="2026-02-19 21:18:39.449025257 +0000 UTC m=+1150.076868327" lastFinishedPulling="2026-02-19 21:18:52.93182414 +0000 UTC m=+1163.559667180" observedRunningTime="2026-02-19 21:18:53.730575284 +0000 UTC m=+1164.358418334" watchObservedRunningTime="2026-02-19 21:18:53.732059831 +0000 UTC m=+1164.359902881" Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.759839 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.70093124 podStartE2EDuration="28.759816765s" podCreationTimestamp="2026-02-19 21:18:25 +0000 UTC" firstStartedPulling="2026-02-19 21:18:37.884950405 +0000 UTC m=+1148.512793445" lastFinishedPulling="2026-02-19 21:18:52.94383592 +0000 UTC m=+1163.571678970" observedRunningTime="2026-02-19 21:18:53.748919752 +0000 UTC m=+1164.376762802" watchObservedRunningTime="2026-02-19 21:18:53.759816765 +0000 UTC m=+1164.387659815" Feb 19 21:18:53 crc kubenswrapper[4886]: I0219 21:18:53.773276 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4zsb6" podStartSLOduration=18.885584616 podStartE2EDuration="28.77323487s" 
podCreationTimestamp="2026-02-19 21:18:25 +0000 UTC" firstStartedPulling="2026-02-19 21:18:37.266096765 +0000 UTC m=+1147.893939815" lastFinishedPulling="2026-02-19 21:18:47.153746979 +0000 UTC m=+1157.781590069" observedRunningTime="2026-02-19 21:18:53.767202959 +0000 UTC m=+1164.395046019" watchObservedRunningTime="2026-02-19 21:18:53.77323487 +0000 UTC m=+1164.401077930" Feb 19 21:18:54 crc kubenswrapper[4886]: I0219 21:18:54.092650 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:54 crc kubenswrapper[4886]: I0219 21:18:54.134577 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.067006 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.104082 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.130652 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.149780 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.412429 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2m9mv"] Feb 19 21:18:55 crc kubenswrapper[4886]: E0219 21:18:55.413064 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="dnsmasq-dns" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.413079 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="dnsmasq-dns" Feb 19 21:18:55 crc kubenswrapper[4886]: E0219 
21:18:55.413088 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="init" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.413095 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="init" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.413299 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8902da83-187f-4cc3-8486-3d7ed6625bdb" containerName="dnsmasq-dns" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.414240 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.417673 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.427206 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2m9mv"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.468161 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-cd2cs"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.469517 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.473602 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.480645 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cd2cs"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.543349 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.578792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-config\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.578842 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe04f760-9b9f-4e99-80ff-467c2480540a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579013 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fe04f760-9b9f-4e99-80ff-467c2480540a-ovs-rundir\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579048 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579096 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkhsp\" (UniqueName: \"kubernetes.io/projected/fe04f760-9b9f-4e99-80ff-467c2480540a-kube-api-access-fkhsp\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579144 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe04f760-9b9f-4e99-80ff-467c2480540a-combined-ca-bundle\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579217 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579246 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc945\" (UniqueName: \"kubernetes.io/projected/c4392312-aceb-48b6-947c-fe12ac84dcc4-kube-api-access-gc945\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579371 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe04f760-9b9f-4e99-80ff-467c2480540a-config\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.579401 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fe04f760-9b9f-4e99-80ff-467c2480540a-ovn-rundir\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.590604 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.650342 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2m9mv"] Feb 19 21:18:55 crc kubenswrapper[4886]: E0219 21:18:55.651060 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-gc945 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" podUID="c4392312-aceb-48b6-947c-fe12ac84dcc4" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.675789 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r6595"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.677940 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.682348 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fe04f760-9b9f-4e99-80ff-467c2480540a-ovs-rundir\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683338 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683375 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkhsp\" (UniqueName: \"kubernetes.io/projected/fe04f760-9b9f-4e99-80ff-467c2480540a-kube-api-access-fkhsp\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe04f760-9b9f-4e99-80ff-467c2480540a-combined-ca-bundle\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683448 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc945\" (UniqueName: \"kubernetes.io/projected/c4392312-aceb-48b6-947c-fe12ac84dcc4-kube-api-access-gc945\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe04f760-9b9f-4e99-80ff-467c2480540a-config\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683595 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fe04f760-9b9f-4e99-80ff-467c2480540a-ovn-rundir\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-config\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.683681 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fe04f760-9b9f-4e99-80ff-467c2480540a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.684535 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fe04f760-9b9f-4e99-80ff-467c2480540a-ovs-rundir\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.685703 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.686161 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe04f760-9b9f-4e99-80ff-467c2480540a-config\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.686333 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-config\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.686391 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fe04f760-9b9f-4e99-80ff-467c2480540a-ovn-rundir\") pod \"ovn-controller-metrics-cd2cs\" (UID: 
\"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.687194 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.693403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe04f760-9b9f-4e99-80ff-467c2480540a-combined-ca-bundle\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.699806 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe04f760-9b9f-4e99-80ff-467c2480540a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.705631 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc945\" (UniqueName: \"kubernetes.io/projected/c4392312-aceb-48b6-947c-fe12ac84dcc4-kube-api-access-gc945\") pod \"dnsmasq-dns-7fd796d7df-2m9mv\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.706290 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r6595"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.707495 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkhsp\" (UniqueName: 
\"kubernetes.io/projected/fe04f760-9b9f-4e99-80ff-467c2480540a-kube-api-access-fkhsp\") pod \"ovn-controller-metrics-cd2cs\" (UID: \"fe04f760-9b9f-4e99-80ff-467c2480540a\") " pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.785121 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.787462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-config\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.787583 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.787615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.787636 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g8msz\" (UniqueName: \"kubernetes.io/projected/a923266b-c71d-4ed9-95b1-8e04615a380b-kube-api-access-g8msz\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.802991 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cd2cs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.840420 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.842185 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.847281 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.850044 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.850338 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.850575 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ngg42" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.856582 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.875908 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.877611 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.893545 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.893598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.893621 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8msz\" (UniqueName: \"kubernetes.io/projected/a923266b-c71d-4ed9-95b1-8e04615a380b-kube-api-access-g8msz\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.893710 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.893727 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-config\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.894533 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-config\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.895012 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.895494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.896208 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:55 crc kubenswrapper[4886]: I0219 21:18:55.926082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8msz\" (UniqueName: \"kubernetes.io/projected/a923266b-c71d-4ed9-95b1-8e04615a380b-kube-api-access-g8msz\") pod \"dnsmasq-dns-86db49b7ff-r6595\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:55.999371 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-scripts\") pod 
\"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:55.999427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:55.999455 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:55.999569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:55.999615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:55.999637 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-config\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 
crc kubenswrapper[4886]: I0219 21:18:55.999658 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdzh\" (UniqueName: \"kubernetes.io/projected/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-kube-api-access-mfdzh\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.083297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.106603 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-scripts\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.106656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.106685 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.106778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc 
kubenswrapper[4886]: I0219 21:18:56.106811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.106833 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-config\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.106853 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdzh\" (UniqueName: \"kubernetes.io/projected/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-kube-api-access-mfdzh\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.107928 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-scripts\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.110482 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.112007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-config\") pod \"ovn-northd-0\" (UID: 
\"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.116625 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.117362 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.132722 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.167137 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdzh\" (UniqueName: \"kubernetes.io/projected/ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b-kube-api-access-mfdzh\") pod \"ovn-northd-0\" (UID: \"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b\") " pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.243023 4886 generic.go:334] "Generic (PLEG): container finished" podID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerID="9bcc5bfcaf031b1fe92986e2c726892392d4f127265a62b4c38f7af660aaaa6a" exitCode=0 Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.243143 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c","Type":"ContainerDied","Data":"9bcc5bfcaf031b1fe92986e2c726892392d4f127265a62b4c38f7af660aaaa6a"} Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.248544 4886 generic.go:334] "Generic (PLEG): container finished" podID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerID="615fef6ef535c1860b8584392f092a9783e67890734ac4400dadf1278c1a1d86" exitCode=0 Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.248647 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.249104 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3","Type":"ContainerDied","Data":"615fef6ef535c1860b8584392f092a9783e67890734ac4400dadf1278c1a1d86"} Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.281873 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.283213 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.414529 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-dns-svc\") pod \"c4392312-aceb-48b6-947c-fe12ac84dcc4\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.415002 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc945\" (UniqueName: \"kubernetes.io/projected/c4392312-aceb-48b6-947c-fe12ac84dcc4-kube-api-access-gc945\") pod \"c4392312-aceb-48b6-947c-fe12ac84dcc4\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.415063 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-config\") pod \"c4392312-aceb-48b6-947c-fe12ac84dcc4\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.415108 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-ovsdbserver-nb\") pod \"c4392312-aceb-48b6-947c-fe12ac84dcc4\" (UID: \"c4392312-aceb-48b6-947c-fe12ac84dcc4\") " Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.415304 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4392312-aceb-48b6-947c-fe12ac84dcc4" (UID: "c4392312-aceb-48b6-947c-fe12ac84dcc4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.415751 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-config" (OuterVolumeSpecName: "config") pod "c4392312-aceb-48b6-947c-fe12ac84dcc4" (UID: "c4392312-aceb-48b6-947c-fe12ac84dcc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.415895 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4392312-aceb-48b6-947c-fe12ac84dcc4" (UID: "c4392312-aceb-48b6-947c-fe12ac84dcc4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.419309 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.419557 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.419576 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4392312-aceb-48b6-947c-fe12ac84dcc4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.426390 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4392312-aceb-48b6-947c-fe12ac84dcc4-kube-api-access-gc945" (OuterVolumeSpecName: "kube-api-access-gc945") pod 
"c4392312-aceb-48b6-947c-fe12ac84dcc4" (UID: "c4392312-aceb-48b6-947c-fe12ac84dcc4"). InnerVolumeSpecName "kube-api-access-gc945". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.520898 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc945\" (UniqueName: \"kubernetes.io/projected/c4392312-aceb-48b6-947c-fe12ac84dcc4-kube-api-access-gc945\") on node \"crc\" DevicePath \"\"" Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.546381 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cd2cs"] Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.741765 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r6595"] Feb 19 21:18:56 crc kubenswrapper[4886]: W0219 21:18:56.764079 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda923266b_c71d_4ed9_95b1_8e04615a380b.slice/crio-5dd86ada0a0a75e7ea735d0167334f496bfff5e2efd57c542e2d199b05a8f4b8 WatchSource:0}: Error finding container 5dd86ada0a0a75e7ea735d0167334f496bfff5e2efd57c542e2d199b05a8f4b8: Status 404 returned error can't find the container with id 5dd86ada0a0a75e7ea735d0167334f496bfff5e2efd57c542e2d199b05a8f4b8 Feb 19 21:18:56 crc kubenswrapper[4886]: I0219 21:18:56.861656 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 21:18:56 crc kubenswrapper[4886]: W0219 21:18:56.869954 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac6e9fa9_1c1c_4625_bde3_3f389a8cda3b.slice/crio-52e18e8b9d0ca062ce6efcfe63387cc7e2067fad2fd0f52876b29e7b3efb571c WatchSource:0}: Error finding container 52e18e8b9d0ca062ce6efcfe63387cc7e2067fad2fd0f52876b29e7b3efb571c: Status 404 returned error can't find the container with id 
52e18e8b9d0ca062ce6efcfe63387cc7e2067fad2fd0f52876b29e7b3efb571c Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.257209 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cd2cs" event={"ID":"fe04f760-9b9f-4e99-80ff-467c2480540a","Type":"ContainerStarted","Data":"a887af2848e2137a53a9c1d3df6cb758c5ca5f92421958aee4ccbcc590d927d8"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.257248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cd2cs" event={"ID":"fe04f760-9b9f-4e99-80ff-467c2480540a","Type":"ContainerStarted","Data":"b44d91933619e155099a13a24014afa75e5eb779384c44eb31de010e37af6181"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.259095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3","Type":"ContainerStarted","Data":"50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.260759 4886 generic.go:334] "Generic (PLEG): container finished" podID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerID="0a9facf2580d65f7dd47abd397a68f8dab2a2d4b42945ae1277e7dab52f6c94a" exitCode=0 Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.260818 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" event={"ID":"a923266b-c71d-4ed9-95b1-8e04615a380b","Type":"ContainerDied","Data":"0a9facf2580d65f7dd47abd397a68f8dab2a2d4b42945ae1277e7dab52f6c94a"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.260837 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" event={"ID":"a923266b-c71d-4ed9-95b1-8e04615a380b","Type":"ContainerStarted","Data":"5dd86ada0a0a75e7ea735d0167334f496bfff5e2efd57c542e2d199b05a8f4b8"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.262375 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-northd-0" event={"ID":"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b","Type":"ContainerStarted","Data":"52e18e8b9d0ca062ce6efcfe63387cc7e2067fad2fd0f52876b29e7b3efb571c"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.264718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c","Type":"ContainerStarted","Data":"bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211"} Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.265185 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2m9mv" Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.295886 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-cd2cs" podStartSLOduration=2.295870829 podStartE2EDuration="2.295870829s" podCreationTimestamp="2026-02-19 21:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:18:57.278667519 +0000 UTC m=+1167.906510589" watchObservedRunningTime="2026-02-19 21:18:57.295870829 +0000 UTC m=+1167.923713879" Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.306149 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.54865457 podStartE2EDuration="40.306133075s" podCreationTimestamp="2026-02-19 21:18:17 +0000 UTC" firstStartedPulling="2026-02-19 21:18:32.894365606 +0000 UTC m=+1143.522208656" lastFinishedPulling="2026-02-19 21:18:46.651844101 +0000 UTC m=+1157.279687161" observedRunningTime="2026-02-19 21:18:57.30072183 +0000 UTC m=+1167.928564890" watchObservedRunningTime="2026-02-19 21:18:57.306133075 +0000 UTC m=+1167.933976125" Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.346788 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.705213923 podStartE2EDuration="38.34677268s" podCreationTimestamp="2026-02-19 21:18:19 +0000 UTC" firstStartedPulling="2026-02-19 21:18:35.650668011 +0000 UTC m=+1146.278511061" lastFinishedPulling="2026-02-19 21:18:47.292226768 +0000 UTC m=+1157.920069818" observedRunningTime="2026-02-19 21:18:57.333421967 +0000 UTC m=+1167.961265017" watchObservedRunningTime="2026-02-19 21:18:57.34677268 +0000 UTC m=+1167.974615730" Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.399057 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2m9mv"] Feb 19 21:18:57 crc kubenswrapper[4886]: I0219 21:18:57.419293 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2m9mv"] Feb 19 21:18:58 crc kubenswrapper[4886]: I0219 21:18:58.276421 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" event={"ID":"a923266b-c71d-4ed9-95b1-8e04615a380b","Type":"ContainerStarted","Data":"5c19303ab24e9ce0e1e1eb16f09dced0bfc3e190aadfd4f7628cf92330175d81"} Feb 19 21:18:58 crc kubenswrapper[4886]: I0219 21:18:58.291378 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" podStartSLOduration=3.2913629970000002 podStartE2EDuration="3.291362997s" podCreationTimestamp="2026-02-19 21:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:18:58.289918731 +0000 UTC m=+1168.917761781" watchObservedRunningTime="2026-02-19 21:18:58.291362997 +0000 UTC m=+1168.919206047" Feb 19 21:18:58 crc kubenswrapper[4886]: I0219 21:18:58.664895 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4392312-aceb-48b6-947c-fe12ac84dcc4" path="/var/lib/kubelet/pods/c4392312-aceb-48b6-947c-fe12ac84dcc4/volumes" Feb 19 21:18:59 crc 
kubenswrapper[4886]: I0219 21:18:59.291413 4886 generic.go:334] "Generic (PLEG): container finished" podID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerID="67c38c89f5bb22932e22d47703fc0fe71f7a8b826a0f8ee230b5ad9693ac5bcc" exitCode=0 Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.291494 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerDied","Data":"67c38c89f5bb22932e22d47703fc0fe71f7a8b826a0f8ee230b5ad9693ac5bcc"} Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.299802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b","Type":"ContainerStarted","Data":"1b53af592b452074ecd5b190adde9feb92d88b44cb99ee5c62a0be40d28043bf"} Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.299838 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac6e9fa9-1c1c-4625-bde3-3f389a8cda3b","Type":"ContainerStarted","Data":"1fd16c2732d730d8c286bcda07039860d4ae1fe941c8000b71c0dfc7f9bd5954"} Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.299897 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.299967 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.364608 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.894108083 podStartE2EDuration="4.364592737s" podCreationTimestamp="2026-02-19 21:18:55 +0000 UTC" firstStartedPulling="2026-02-19 21:18:56.872247246 +0000 UTC m=+1167.500090296" lastFinishedPulling="2026-02-19 21:18:58.34273189 +0000 UTC m=+1168.970574950" observedRunningTime="2026-02-19 21:18:59.361837928 +0000 UTC 
m=+1169.989680978" watchObservedRunningTime="2026-02-19 21:18:59.364592737 +0000 UTC m=+1169.992435787" Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.635527 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 21:18:59 crc kubenswrapper[4886]: I0219 21:18:59.635805 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 21:19:00 crc kubenswrapper[4886]: I0219 21:19:00.813160 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 21:19:00 crc kubenswrapper[4886]: I0219 21:19:00.813243 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.621984 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.771548 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r6595"] Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.771824 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerName="dnsmasq-dns" containerID="cri-o://5c19303ab24e9ce0e1e1eb16f09dced0bfc3e190aadfd4f7628cf92330175d81" gracePeriod=10 Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.781494 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.802106 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-w68nz"] Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.805125 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.831232 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-w68nz"] Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.907230 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-dns-svc\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.907918 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.908005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw79h\" (UniqueName: \"kubernetes.io/projected/24f96e8e-1d05-40aa-ac1f-20450b541c44-kube-api-access-rw79h\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.908039 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-config\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:03 crc kubenswrapper[4886]: I0219 21:19:03.908225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.011873 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.012012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-dns-svc\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.012113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.012155 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw79h\" (UniqueName: \"kubernetes.io/projected/24f96e8e-1d05-40aa-ac1f-20450b541c44-kube-api-access-rw79h\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.012184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-config\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.013172 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.017042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-config\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.025926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-dns-svc\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.026650 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-w68nz\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.041731 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw79h\" (UniqueName: \"kubernetes.io/projected/24f96e8e-1d05-40aa-ac1f-20450b541c44-kube-api-access-rw79h\") pod \"dnsmasq-dns-698758b865-w68nz\" 
(UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.148747 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.417020 4886 generic.go:334] "Generic (PLEG): container finished" podID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerID="5c19303ab24e9ce0e1e1eb16f09dced0bfc3e190aadfd4f7628cf92330175d81" exitCode=0 Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.417373 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" event={"ID":"a923266b-c71d-4ed9-95b1-8e04615a380b","Type":"ContainerDied","Data":"5c19303ab24e9ce0e1e1eb16f09dced0bfc3e190aadfd4f7628cf92330175d81"} Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.506984 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.578755 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.634253 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.745496 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-sb\") pod \"a923266b-c71d-4ed9-95b1-8e04615a380b\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.745607 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-config\") pod \"a923266b-c71d-4ed9-95b1-8e04615a380b\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.745780 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-nb\") pod \"a923266b-c71d-4ed9-95b1-8e04615a380b\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.745818 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-dns-svc\") pod \"a923266b-c71d-4ed9-95b1-8e04615a380b\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.745852 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8msz\" (UniqueName: \"kubernetes.io/projected/a923266b-c71d-4ed9-95b1-8e04615a380b-kube-api-access-g8msz\") pod \"a923266b-c71d-4ed9-95b1-8e04615a380b\" (UID: \"a923266b-c71d-4ed9-95b1-8e04615a380b\") " Feb 
19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.755965 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a923266b-c71d-4ed9-95b1-8e04615a380b-kube-api-access-g8msz" (OuterVolumeSpecName: "kube-api-access-g8msz") pod "a923266b-c71d-4ed9-95b1-8e04615a380b" (UID: "a923266b-c71d-4ed9-95b1-8e04615a380b"). InnerVolumeSpecName "kube-api-access-g8msz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.793509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a923266b-c71d-4ed9-95b1-8e04615a380b" (UID: "a923266b-c71d-4ed9-95b1-8e04615a380b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.801411 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a923266b-c71d-4ed9-95b1-8e04615a380b" (UID: "a923266b-c71d-4ed9-95b1-8e04615a380b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.814861 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a923266b-c71d-4ed9-95b1-8e04615a380b" (UID: "a923266b-c71d-4ed9-95b1-8e04615a380b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.822477 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-config" (OuterVolumeSpecName: "config") pod "a923266b-c71d-4ed9-95b1-8e04615a380b" (UID: "a923266b-c71d-4ed9-95b1-8e04615a380b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.849567 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.849598 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.849609 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.849620 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8msz\" (UniqueName: \"kubernetes.io/projected/a923266b-c71d-4ed9-95b1-8e04615a380b-kube-api-access-g8msz\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.849628 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a923266b-c71d-4ed9-95b1-8e04615a380b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.865287 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-w68nz"] Feb 19 21:19:04 crc 
kubenswrapper[4886]: W0219 21:19:04.881780 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24f96e8e_1d05_40aa_ac1f_20450b541c44.slice/crio-8f211fce877a2c04f9b74b8f5565c98b589d43616c6c4b7b16bea309afb9feed WatchSource:0}: Error finding container 8f211fce877a2c04f9b74b8f5565c98b589d43616c6c4b7b16bea309afb9feed: Status 404 returned error can't find the container with id 8f211fce877a2c04f9b74b8f5565c98b589d43616c6c4b7b16bea309afb9feed Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.930505 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 21:19:04 crc kubenswrapper[4886]: E0219 21:19:04.931247 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerName="init" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.931278 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerName="init" Feb 19 21:19:04 crc kubenswrapper[4886]: E0219 21:19:04.931296 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerName="dnsmasq-dns" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.931303 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerName="dnsmasq-dns" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.931499 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" containerName="dnsmasq-dns" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.937293 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.939932 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.940252 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.940365 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tp6pw" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.940705 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 21:19:04 crc kubenswrapper[4886]: I0219 21:19:04.964330 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.054694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.054753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttz7x\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-kube-api-access-ttz7x\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.054776 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/da7809cc-c661-4b8c-ab78-4f87229b18d1-lock\") pod \"swift-storage-0\" (UID: 
\"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.054822 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.054906 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/da7809cc-c661-4b8c-ab78-4f87229b18d1-cache\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.054942 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7809cc-c661-4b8c-ab78-4f87229b18d1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/da7809cc-c661-4b8c-ab78-4f87229b18d1-cache\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7809cc-c661-4b8c-ab78-4f87229b18d1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157293 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttz7x\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-kube-api-access-ttz7x\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157386 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/da7809cc-c661-4b8c-ab78-4f87229b18d1-lock\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157545 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/da7809cc-c661-4b8c-ab78-4f87229b18d1-cache\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.157884 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/da7809cc-c661-4b8c-ab78-4f87229b18d1-lock\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.158967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" 
(UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: E0219 21:19:05.159287 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 21:19:05 crc kubenswrapper[4886]: E0219 21:19:05.159373 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 21:19:05 crc kubenswrapper[4886]: E0219 21:19:05.159418 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift podName:da7809cc-c661-4b8c-ab78-4f87229b18d1 nodeName:}" failed. No retries permitted until 2026-02-19 21:19:05.659400716 +0000 UTC m=+1176.287243776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift") pod "swift-storage-0" (UID: "da7809cc-c661-4b8c-ab78-4f87229b18d1") : configmap "swift-ring-files" not found Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.170144 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.170193 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/026987dba9185d7da694c87d56f3c87a5d3b33bf8fc5824dc932a1cd87497891/globalmount\"" pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.178379 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7809cc-c661-4b8c-ab78-4f87229b18d1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.179660 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttz7x\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-kube-api-access-ttz7x\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.215660 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-938d921e-4842-4d56-bedc-1a76de1e61a1\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.428416 4886 generic.go:334] "Generic (PLEG): container finished" podID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerID="5dd7627d20ae35f0565f20b254e66f331f5a254c6ecfc1f57a5d1629745417f2" exitCode=0 Feb 19 21:19:05 crc kubenswrapper[4886]: 
I0219 21:19:05.428511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-w68nz" event={"ID":"24f96e8e-1d05-40aa-ac1f-20450b541c44","Type":"ContainerDied","Data":"5dd7627d20ae35f0565f20b254e66f331f5a254c6ecfc1f57a5d1629745417f2"} Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.428768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-w68nz" event={"ID":"24f96e8e-1d05-40aa-ac1f-20450b541c44","Type":"ContainerStarted","Data":"8f211fce877a2c04f9b74b8f5565c98b589d43616c6c4b7b16bea309afb9feed"} Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.439022 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.440446 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r6595" event={"ID":"a923266b-c71d-4ed9-95b1-8e04615a380b","Type":"ContainerDied","Data":"5dd86ada0a0a75e7ea735d0167334f496bfff5e2efd57c542e2d199b05a8f4b8"} Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.440519 4886 scope.go:117] "RemoveContainer" containerID="5c19303ab24e9ce0e1e1eb16f09dced0bfc3e190aadfd4f7628cf92330175d81" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.495962 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r6595"] Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.506880 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r6595"] Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.525996 4886 scope.go:117] "RemoveContainer" containerID="0a9facf2580d65f7dd47abd397a68f8dab2a2d4b42945ae1277e7dab52f6c94a" Feb 19 21:19:05 crc kubenswrapper[4886]: I0219 21:19:05.669869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:05 crc kubenswrapper[4886]: E0219 21:19:05.670092 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 21:19:05 crc kubenswrapper[4886]: E0219 21:19:05.670107 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 21:19:05 crc kubenswrapper[4886]: E0219 21:19:05.670148 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift podName:da7809cc-c661-4b8c-ab78-4f87229b18d1 nodeName:}" failed. No retries permitted until 2026-02-19 21:19:06.670134903 +0000 UTC m=+1177.297977953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift") pod "swift-storage-0" (UID: "da7809cc-c661-4b8c-ab78-4f87229b18d1") : configmap "swift-ring-files" not found Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.202822 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e29c-account-create-update-2897c"] Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.204438 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.207684 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.234226 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e29c-account-create-update-2897c"] Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.246367 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cvm86"] Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.248067 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.273543 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cvm86"] Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.384875 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dvf\" (UniqueName: \"kubernetes.io/projected/240c4666-ec12-4498-9f53-dd95d7a33ed4-kube-api-access-k2dvf\") pod \"glance-e29c-account-create-update-2897c\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.384955 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06257e23-2243-4646-a2fb-95b947d5c466-operator-scripts\") pod \"glance-db-create-cvm86\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.385129 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/240c4666-ec12-4498-9f53-dd95d7a33ed4-operator-scripts\") pod \"glance-e29c-account-create-update-2897c\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.385219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhlb\" (UniqueName: \"kubernetes.io/projected/06257e23-2243-4646-a2fb-95b947d5c466-kube-api-access-kfhlb\") pod \"glance-db-create-cvm86\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.464931 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-w68nz" event={"ID":"24f96e8e-1d05-40aa-ac1f-20450b541c44","Type":"ContainerStarted","Data":"8db1c8185e9fd2c2375563d15ac049e3c2cd18b62ca586d08dcc6fafb3474d47"} Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.466176 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.487552 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dvf\" (UniqueName: \"kubernetes.io/projected/240c4666-ec12-4498-9f53-dd95d7a33ed4-kube-api-access-k2dvf\") pod \"glance-e29c-account-create-update-2897c\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.487620 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06257e23-2243-4646-a2fb-95b947d5c466-operator-scripts\") pod \"glance-db-create-cvm86\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 
21:19:06.487769 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240c4666-ec12-4498-9f53-dd95d7a33ed4-operator-scripts\") pod \"glance-e29c-account-create-update-2897c\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.487834 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhlb\" (UniqueName: \"kubernetes.io/projected/06257e23-2243-4646-a2fb-95b947d5c466-kube-api-access-kfhlb\") pod \"glance-db-create-cvm86\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.489528 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06257e23-2243-4646-a2fb-95b947d5c466-operator-scripts\") pod \"glance-db-create-cvm86\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.490115 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240c4666-ec12-4498-9f53-dd95d7a33ed4-operator-scripts\") pod \"glance-e29c-account-create-update-2897c\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.521685 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-w68nz" podStartSLOduration=3.521652485 podStartE2EDuration="3.521652485s" podCreationTimestamp="2026-02-19 21:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:06.514479546 
+0000 UTC m=+1177.142322596" watchObservedRunningTime="2026-02-19 21:19:06.521652485 +0000 UTC m=+1177.149495535" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.535144 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dvf\" (UniqueName: \"kubernetes.io/projected/240c4666-ec12-4498-9f53-dd95d7a33ed4-kube-api-access-k2dvf\") pod \"glance-e29c-account-create-update-2897c\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.554941 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhlb\" (UniqueName: \"kubernetes.io/projected/06257e23-2243-4646-a2fb-95b947d5c466-kube-api-access-kfhlb\") pod \"glance-db-create-cvm86\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.575124 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cvm86" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.657825 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a923266b-c71d-4ed9-95b1-8e04615a380b" path="/var/lib/kubelet/pods/a923266b-c71d-4ed9-95b1-8e04615a380b/volumes" Feb 19 21:19:06 crc kubenswrapper[4886]: I0219 21:19:06.698498 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:06 crc kubenswrapper[4886]: E0219 21:19:06.699408 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 21:19:06 crc kubenswrapper[4886]: E0219 21:19:06.699427 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 21:19:06 crc kubenswrapper[4886]: E0219 21:19:06.699471 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift podName:da7809cc-c661-4b8c-ab78-4f87229b18d1 nodeName:}" failed. No retries permitted until 2026-02-19 21:19:08.699454527 +0000 UTC m=+1179.327297577 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift") pod "swift-storage-0" (UID: "da7809cc-c661-4b8c-ab78-4f87229b18d1") : configmap "swift-ring-files" not found Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:06.832169 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:07.637110 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e29c-account-create-update-2897c"] Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:07.672452 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cvm86"] Feb 19 21:19:07 crc kubenswrapper[4886]: W0219 21:19:07.685416 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06257e23_2243_4646_a2fb_95b947d5c466.slice/crio-432aa2c319eacaef65b8f3fcedc53f83df6844cd5aab54dd026667b9e9de5385 WatchSource:0}: Error finding container 432aa2c319eacaef65b8f3fcedc53f83df6844cd5aab54dd026667b9e9de5385: Status 404 returned error can't find the container with id 432aa2c319eacaef65b8f3fcedc53f83df6844cd5aab54dd026667b9e9de5385 Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:07.964410 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hbm8q"] Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:07.965856 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:07.968037 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 21:19:07 crc kubenswrapper[4886]: I0219 21:19:07.973334 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hbm8q"] Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.127867 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjwz\" (UniqueName: \"kubernetes.io/projected/03736bf5-9857-4ca1-b9f8-6474cfefb230-kube-api-access-gkjwz\") pod \"root-account-create-update-hbm8q\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.128174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736bf5-9857-4ca1-b9f8-6474cfefb230-operator-scripts\") pod \"root-account-create-update-hbm8q\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.230488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjwz\" (UniqueName: \"kubernetes.io/projected/03736bf5-9857-4ca1-b9f8-6474cfefb230-kube-api-access-gkjwz\") pod \"root-account-create-update-hbm8q\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.230661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736bf5-9857-4ca1-b9f8-6474cfefb230-operator-scripts\") pod \"root-account-create-update-hbm8q\" (UID: 
\"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.231435 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736bf5-9857-4ca1-b9f8-6474cfefb230-operator-scripts\") pod \"root-account-create-update-hbm8q\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.249768 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjwz\" (UniqueName: \"kubernetes.io/projected/03736bf5-9857-4ca1-b9f8-6474cfefb230-kube-api-access-gkjwz\") pod \"root-account-create-update-hbm8q\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:08 crc kubenswrapper[4886]: I0219 21:19:08.281828 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.486518 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvm86" event={"ID":"06257e23-2243-4646-a2fb-95b947d5c466","Type":"ContainerStarted","Data":"034f95ca8c9086225c1e862d6f52a18213ac7a971623c65163c9a41a863fcbbd"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.486787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvm86" event={"ID":"06257e23-2243-4646-a2fb-95b947d5c466","Type":"ContainerStarted","Data":"432aa2c319eacaef65b8f3fcedc53f83df6844cd5aab54dd026667b9e9de5385"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.496715 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-2897c" event={"ID":"240c4666-ec12-4498-9f53-dd95d7a33ed4","Type":"ContainerStarted","Data":"2bab4fa65f6ff951fc44be98722e578c8fb52c4fce42310c55441a42e14299d5"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.496750 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-2897c" event={"ID":"240c4666-ec12-4498-9f53-dd95d7a33ed4","Type":"ContainerStarted","Data":"8e048fa7d0b81c0a94044b80b2c89a5f0a87c1e255c14a736f7f46e6dbd1915d"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.506239 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-cvm86" podStartSLOduration=2.506222552 podStartE2EDuration="2.506222552s" podCreationTimestamp="2026-02-19 21:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:08.501407702 +0000 UTC m=+1179.129250752" watchObservedRunningTime="2026-02-19 21:19:08.506222552 +0000 UTC m=+1179.134065602" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.522314 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e29c-account-create-update-2897c" podStartSLOduration=2.522295904 podStartE2EDuration="2.522295904s" podCreationTimestamp="2026-02-19 21:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:08.519368021 +0000 UTC m=+1179.147211071" watchObservedRunningTime="2026-02-19 21:19:08.522295904 +0000 UTC m=+1179.150138954" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.740447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:10 crc kubenswrapper[4886]: E0219 21:19:08.740639 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 21:19:10 crc kubenswrapper[4886]: E0219 21:19:08.740653 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 21:19:10 crc kubenswrapper[4886]: E0219 21:19:08.740706 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift podName:da7809cc-c661-4b8c-ab78-4f87229b18d1 nodeName:}" failed. No retries permitted until 2026-02-19 21:19:12.740690399 +0000 UTC m=+1183.368533449 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift") pod "swift-storage-0" (UID: "da7809cc-c661-4b8c-ab78-4f87229b18d1") : configmap "swift-ring-files" not found Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.753841 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hbm8q"] Feb 19 21:19:10 crc kubenswrapper[4886]: W0219 21:19:08.771085 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03736bf5_9857_4ca1_b9f8_6474cfefb230.slice/crio-a8f200df453b7eb48d1e8e37f09666111d0c0586a449acd00df6a843757f5620 WatchSource:0}: Error finding container a8f200df453b7eb48d1e8e37f09666111d0c0586a449acd00df6a843757f5620: Status 404 returned error can't find the container with id a8f200df453b7eb48d1e8e37f09666111d0c0586a449acd00df6a843757f5620 Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.833675 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wkj8t"] Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.834951 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.837427 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.839507 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.839692 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.848168 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wkj8t"] Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.939583 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.943944 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-dispersionconf\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.944018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-ring-data-devices\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.944045 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7g7\" (UniqueName: 
\"kubernetes.io/projected/9608b54a-4daa-4b61-8be4-30a5f155512d-kube-api-access-rl7g7\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.944088 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-swiftconf\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.944112 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-scripts\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.944200 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-combined-ca-bundle\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:08.944222 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9608b54a-4daa-4b61-8be4-30a5f155512d-etc-swift\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.021944 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 21:19:10 crc 
kubenswrapper[4886]: I0219 21:19:09.046674 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-ring-data-devices\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.046722 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7g7\" (UniqueName: \"kubernetes.io/projected/9608b54a-4daa-4b61-8be4-30a5f155512d-kube-api-access-rl7g7\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.046764 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-swiftconf\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.046790 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-scripts\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.046847 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-combined-ca-bundle\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.046870 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9608b54a-4daa-4b61-8be4-30a5f155512d-etc-swift\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.047066 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-dispersionconf\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.047755 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-ring-data-devices\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.050297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9608b54a-4daa-4b61-8be4-30a5f155512d-etc-swift\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.050492 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-scripts\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.055166 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-dispersionconf\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.055744 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-combined-ca-bundle\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.056413 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-swiftconf\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.063113 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7g7\" (UniqueName: \"kubernetes.io/projected/9608b54a-4daa-4b61-8be4-30a5f155512d-kube-api-access-rl7g7\") pod \"swift-ring-rebalance-wkj8t\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.210791 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.505500 4886 generic.go:334] "Generic (PLEG): container finished" podID="06257e23-2243-4646-a2fb-95b947d5c466" containerID="034f95ca8c9086225c1e862d6f52a18213ac7a971623c65163c9a41a863fcbbd" exitCode=0 Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.505581 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvm86" event={"ID":"06257e23-2243-4646-a2fb-95b947d5c466","Type":"ContainerDied","Data":"034f95ca8c9086225c1e862d6f52a18213ac7a971623c65163c9a41a863fcbbd"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.525591 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbm8q" event={"ID":"03736bf5-9857-4ca1-b9f8-6474cfefb230","Type":"ContainerStarted","Data":"4e8922dfb830e30a42ef115e0427951e4c6a6b6e81ca7dde9190231ea706411c"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.525650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbm8q" event={"ID":"03736bf5-9857-4ca1-b9f8-6474cfefb230","Type":"ContainerStarted","Data":"a8f200df453b7eb48d1e8e37f09666111d0c0586a449acd00df6a843757f5620"} Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:09.562775 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-hbm8q" podStartSLOduration=2.562759545 podStartE2EDuration="2.562759545s" podCreationTimestamp="2026-02-19 21:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:09.557030562 +0000 UTC m=+1180.184873612" watchObservedRunningTime="2026-02-19 21:19:09.562759545 +0000 UTC m=+1180.190602585" Feb 19 21:19:10 crc kubenswrapper[4886]: I0219 21:19:10.694335 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-79dd89cd97-kwpj9" podUID="622977a2-44e6-4860-9f42-45619a022ed3" containerName="console" containerID="cri-o://bfa0f3e0d485a92a46084ed6c959f73fe9a82d5f278ad39a12b4d2f49ca35a2a" gracePeriod=15 Feb 19 21:19:11 crc kubenswrapper[4886]: I0219 21:19:11.014190 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wkj8t"] Feb 19 21:19:11 crc kubenswrapper[4886]: I0219 21:19:11.554123 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79dd89cd97-kwpj9_622977a2-44e6-4860-9f42-45619a022ed3/console/0.log" Feb 19 21:19:11 crc kubenswrapper[4886]: I0219 21:19:11.554709 4886 generic.go:334] "Generic (PLEG): container finished" podID="622977a2-44e6-4860-9f42-45619a022ed3" containerID="bfa0f3e0d485a92a46084ed6c959f73fe9a82d5f278ad39a12b4d2f49ca35a2a" exitCode=2 Feb 19 21:19:11 crc kubenswrapper[4886]: I0219 21:19:11.554755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dd89cd97-kwpj9" event={"ID":"622977a2-44e6-4860-9f42-45619a022ed3","Type":"ContainerDied","Data":"bfa0f3e0d485a92a46084ed6c959f73fe9a82d5f278ad39a12b4d2f49ca35a2a"} Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.065406 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5lpl5"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.067231 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.097103 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5lpl5"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.161648 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-588e-account-create-update-tkjhl"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.162986 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.168541 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.178960 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-588e-account-create-update-tkjhl"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.250286 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-operator-scripts\") pod \"keystone-db-create-5lpl5\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.250358 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tjt\" (UniqueName: \"kubernetes.io/projected/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-kube-api-access-p2tjt\") pod \"keystone-db-create-5lpl5\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.266143 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k78l4"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.267499 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.278329 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k78l4"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.352151 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rfn\" (UniqueName: \"kubernetes.io/projected/adccc6e1-c1da-45d2-aea8-8a381b733fef-kube-api-access-w8rfn\") pod \"keystone-588e-account-create-update-tkjhl\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.352286 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-operator-scripts\") pod \"keystone-db-create-5lpl5\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.354605 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tjt\" (UniqueName: \"kubernetes.io/projected/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-kube-api-access-p2tjt\") pod \"keystone-db-create-5lpl5\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.354644 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adccc6e1-c1da-45d2-aea8-8a381b733fef-operator-scripts\") pod \"keystone-588e-account-create-update-tkjhl\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.365185 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-operator-scripts\") pod \"keystone-db-create-5lpl5\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.400631 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tjt\" (UniqueName: \"kubernetes.io/projected/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-kube-api-access-p2tjt\") pod \"keystone-db-create-5lpl5\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.429300 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e0b5-account-create-update-whzj7"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.430593 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.433743 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.449320 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e0b5-account-create-update-whzj7"] Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.456048 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldgq\" (UniqueName: \"kubernetes.io/projected/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-kube-api-access-jldgq\") pod \"placement-db-create-k78l4\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.456100 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-operator-scripts\") pod \"placement-db-create-k78l4\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.456127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8rfn\" (UniqueName: \"kubernetes.io/projected/adccc6e1-c1da-45d2-aea8-8a381b733fef-kube-api-access-w8rfn\") pod \"keystone-588e-account-create-update-tkjhl\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.456302 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adccc6e1-c1da-45d2-aea8-8a381b733fef-operator-scripts\") pod \"keystone-588e-account-create-update-tkjhl\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.457028 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adccc6e1-c1da-45d2-aea8-8a381b733fef-operator-scripts\") pod \"keystone-588e-account-create-update-tkjhl\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.544320 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8rfn\" (UniqueName: \"kubernetes.io/projected/adccc6e1-c1da-45d2-aea8-8a381b733fef-kube-api-access-w8rfn\") pod \"keystone-588e-account-create-update-tkjhl\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.558573 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jldgq\" (UniqueName: \"kubernetes.io/projected/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-kube-api-access-jldgq\") pod \"placement-db-create-k78l4\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.558962 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-operator-scripts\") pod \"placement-db-create-k78l4\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.559027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-operator-scripts\") pod \"placement-e0b5-account-create-update-whzj7\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.559086 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mc8\" (UniqueName: \"kubernetes.io/projected/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-kube-api-access-p6mc8\") pod \"placement-e0b5-account-create-update-whzj7\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.559801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-operator-scripts\") pod \"placement-db-create-k78l4\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: 
I0219 21:19:12.585054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldgq\" (UniqueName: \"kubernetes.io/projected/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-kube-api-access-jldgq\") pod \"placement-db-create-k78l4\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.593227 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k78l4" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.660860 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-operator-scripts\") pod \"placement-e0b5-account-create-update-whzj7\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.661095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mc8\" (UniqueName: \"kubernetes.io/projected/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-kube-api-access-p6mc8\") pod \"placement-e0b5-account-create-update-whzj7\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.661584 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-operator-scripts\") pod \"placement-e0b5-account-create-update-whzj7\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.664007 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.678141 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mc8\" (UniqueName: \"kubernetes.io/projected/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-kube-api-access-p6mc8\") pod \"placement-e0b5-account-create-update-whzj7\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.697585 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.755278 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:12 crc kubenswrapper[4886]: I0219 21:19:12.762702 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:12 crc kubenswrapper[4886]: E0219 21:19:12.762884 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 21:19:12 crc kubenswrapper[4886]: E0219 21:19:12.762913 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 21:19:12 crc kubenswrapper[4886]: E0219 21:19:12.762969 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift podName:da7809cc-c661-4b8c-ab78-4f87229b18d1 nodeName:}" failed. No retries permitted until 2026-02-19 21:19:20.762952868 +0000 UTC m=+1191.390795918 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift") pod "swift-storage-0" (UID: "da7809cc-c661-4b8c-ab78-4f87229b18d1") : configmap "swift-ring-files" not found Feb 19 21:19:13 crc kubenswrapper[4886]: W0219 21:19:13.178496 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9608b54a_4daa_4b61_8be4_30a5f155512d.slice/crio-0eadaff11187afcffdf071ac7974ee97f9fdb8ffe604f4579446bd4a1bfbb493 WatchSource:0}: Error finding container 0eadaff11187afcffdf071ac7974ee97f9fdb8ffe604f4579446bd4a1bfbb493: Status 404 returned error can't find the container with id 0eadaff11187afcffdf071ac7974ee97f9fdb8ffe604f4579446bd4a1bfbb493 Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.200010 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.228553 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cvm86" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.375570 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfhlb\" (UniqueName: \"kubernetes.io/projected/06257e23-2243-4646-a2fb-95b947d5c466-kube-api-access-kfhlb\") pod \"06257e23-2243-4646-a2fb-95b947d5c466\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.376096 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06257e23-2243-4646-a2fb-95b947d5c466-operator-scripts\") pod \"06257e23-2243-4646-a2fb-95b947d5c466\" (UID: \"06257e23-2243-4646-a2fb-95b947d5c466\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.376806 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06257e23-2243-4646-a2fb-95b947d5c466-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06257e23-2243-4646-a2fb-95b947d5c466" (UID: "06257e23-2243-4646-a2fb-95b947d5c466"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.376894 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06257e23-2243-4646-a2fb-95b947d5c466-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.405323 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06257e23-2243-4646-a2fb-95b947d5c466-kube-api-access-kfhlb" (OuterVolumeSpecName: "kube-api-access-kfhlb") pod "06257e23-2243-4646-a2fb-95b947d5c466" (UID: "06257e23-2243-4646-a2fb-95b947d5c466"). InnerVolumeSpecName "kube-api-access-kfhlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.423181 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vmhgw"] Feb 19 21:19:13 crc kubenswrapper[4886]: E0219 21:19:13.423664 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06257e23-2243-4646-a2fb-95b947d5c466" containerName="mariadb-database-create" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.423680 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="06257e23-2243-4646-a2fb-95b947d5c466" containerName="mariadb-database-create" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.423919 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="06257e23-2243-4646-a2fb-95b947d5c466" containerName="mariadb-database-create" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.424664 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.431615 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vmhgw"] Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.478503 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ce6552-da24-4a92-9474-47b352bd969e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vmhgw\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.478628 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8dq\" (UniqueName: \"kubernetes.io/projected/62ce6552-da24-4a92-9474-47b352bd969e-kube-api-access-wb8dq\") pod 
\"mysqld-exporter-openstack-db-create-vmhgw\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.478749 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfhlb\" (UniqueName: \"kubernetes.io/projected/06257e23-2243-4646-a2fb-95b947d5c466-kube-api-access-kfhlb\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.577118 4886 generic.go:334] "Generic (PLEG): container finished" podID="240c4666-ec12-4498-9f53-dd95d7a33ed4" containerID="2bab4fa65f6ff951fc44be98722e578c8fb52c4fce42310c55441a42e14299d5" exitCode=0 Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.577208 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-2897c" event={"ID":"240c4666-ec12-4498-9f53-dd95d7a33ed4","Type":"ContainerDied","Data":"2bab4fa65f6ff951fc44be98722e578c8fb52c4fce42310c55441a42e14299d5"} Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.578614 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wkj8t" event={"ID":"9608b54a-4daa-4b61-8be4-30a5f155512d","Type":"ContainerStarted","Data":"0eadaff11187afcffdf071ac7974ee97f9fdb8ffe604f4579446bd4a1bfbb493"} Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.580744 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ce6552-da24-4a92-9474-47b352bd969e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vmhgw\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.580898 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8dq\" (UniqueName: 
\"kubernetes.io/projected/62ce6552-da24-4a92-9474-47b352bd969e-kube-api-access-wb8dq\") pod \"mysqld-exporter-openstack-db-create-vmhgw\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.581957 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ce6552-da24-4a92-9474-47b352bd969e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vmhgw\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.584208 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvm86" event={"ID":"06257e23-2243-4646-a2fb-95b947d5c466","Type":"ContainerDied","Data":"432aa2c319eacaef65b8f3fcedc53f83df6844cd5aab54dd026667b9e9de5385"} Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.584247 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432aa2c319eacaef65b8f3fcedc53f83df6844cd5aab54dd026667b9e9de5385" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.584255 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cvm86" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.599774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8dq\" (UniqueName: \"kubernetes.io/projected/62ce6552-da24-4a92-9474-47b352bd969e-kube-api-access-wb8dq\") pod \"mysqld-exporter-openstack-db-create-vmhgw\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.664783 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d082-account-create-update-2h5pl"] Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.666443 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.667901 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.675829 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d082-account-create-update-2h5pl"] Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.789150 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwdb\" (UniqueName: \"kubernetes.io/projected/4409a578-5632-41e1-bcb2-015deecc0e1a-kube-api-access-xvwdb\") pod \"mysqld-exporter-d082-account-create-update-2h5pl\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.789224 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4409a578-5632-41e1-bcb2-015deecc0e1a-operator-scripts\") pod 
\"mysqld-exporter-d082-account-create-update-2h5pl\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.793888 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.825928 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79dd89cd97-kwpj9_622977a2-44e6-4860-9f42-45619a022ed3/console/0.log" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.826090 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.893576 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-service-ca\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.893782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-trusted-ca-bundle\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.893855 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-serving-cert\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.893922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-oauth-config\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.893985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw9g9\" (UniqueName: \"kubernetes.io/projected/622977a2-44e6-4860-9f42-45619a022ed3-kube-api-access-mw9g9\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.894033 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-oauth-serving-cert\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.894070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-console-config\") pod \"622977a2-44e6-4860-9f42-45619a022ed3\" (UID: \"622977a2-44e6-4860-9f42-45619a022ed3\") " Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.898641 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-service-ca" (OuterVolumeSpecName: "service-ca") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.899046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwdb\" (UniqueName: \"kubernetes.io/projected/4409a578-5632-41e1-bcb2-015deecc0e1a-kube-api-access-xvwdb\") pod \"mysqld-exporter-d082-account-create-update-2h5pl\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.899165 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.899325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4409a578-5632-41e1-bcb2-015deecc0e1a-operator-scripts\") pod \"mysqld-exporter-d082-account-create-update-2h5pl\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.899562 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.899581 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.900141 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.900519 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4409a578-5632-41e1-bcb2-015deecc0e1a-operator-scripts\") pod \"mysqld-exporter-d082-account-create-update-2h5pl\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.900692 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622977a2-44e6-4860-9f42-45619a022ed3-kube-api-access-mw9g9" (OuterVolumeSpecName: "kube-api-access-mw9g9") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "kube-api-access-mw9g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.900741 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-console-config" (OuterVolumeSpecName: "console-config") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.902800 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.907319 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "622977a2-44e6-4860-9f42-45619a022ed3" (UID: "622977a2-44e6-4860-9f42-45619a022ed3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:13 crc kubenswrapper[4886]: I0219 21:19:13.919862 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwdb\" (UniqueName: \"kubernetes.io/projected/4409a578-5632-41e1-bcb2-015deecc0e1a-kube-api-access-xvwdb\") pod \"mysqld-exporter-d082-account-create-update-2h5pl\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.005037 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.005072 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 
21:19:14.005082 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/622977a2-44e6-4860-9f42-45619a022ed3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.005107 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw9g9\" (UniqueName: \"kubernetes.io/projected/622977a2-44e6-4860-9f42-45619a022ed3-kube-api-access-mw9g9\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.005118 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/622977a2-44e6-4860-9f42-45619a022ed3-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.066717 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k78l4"] Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.083687 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-588e-account-create-update-tkjhl"] Feb 19 21:19:14 crc kubenswrapper[4886]: W0219 21:19:14.083953 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c42c45f_c36c_45f6_93dc_8d8fa0ed69ce.slice/crio-21aa920083d0750c1c8cfbbc198d5933401e75d9e38c15cf25bed5d682da3114 WatchSource:0}: Error finding container 21aa920083d0750c1c8cfbbc198d5933401e75d9e38c15cf25bed5d682da3114: Status 404 returned error can't find the container with id 21aa920083d0750c1c8cfbbc198d5933401e75d9e38c15cf25bed5d682da3114 Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.095722 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5lpl5"] Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.107326 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e0b5-account-create-update-whzj7"] 
Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.119049 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.151466 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.231104 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qlf2w"] Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.231410 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerName="dnsmasq-dns" containerID="cri-o://f06e4623bc1db0d058a9a4b676f9523dc6f0f47b1f59227bce81fc00fa949c27" gracePeriod=10 Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.324931 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vmhgw"] Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.593981 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79dd89cd97-kwpj9_622977a2-44e6-4860-9f42-45619a022ed3/console/0.log" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.594434 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dd89cd97-kwpj9" event={"ID":"622977a2-44e6-4860-9f42-45619a022ed3","Type":"ContainerDied","Data":"cdc815f6d4331f525e96df28fbcabfc27ea4ea2cb5e8036aed33132bf90c85c6"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.594479 4886 scope.go:117] "RemoveContainer" containerID="bfa0f3e0d485a92a46084ed6c959f73fe9a82d5f278ad39a12b4d2f49ca35a2a" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.594631 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79dd89cd97-kwpj9" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.604233 4886 generic.go:334] "Generic (PLEG): container finished" podID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerID="f06e4623bc1db0d058a9a4b676f9523dc6f0f47b1f59227bce81fc00fa949c27" exitCode=0 Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.611165 4886 generic.go:334] "Generic (PLEG): container finished" podID="03736bf5-9857-4ca1-b9f8-6474cfefb230" containerID="4e8922dfb830e30a42ef115e0427951e4c6a6b6e81ca7dde9190231ea706411c" exitCode=0 Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623271 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" event={"ID":"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81","Type":"ContainerDied","Data":"f06e4623bc1db0d058a9a4b676f9523dc6f0f47b1f59227bce81fc00fa949c27"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623324 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k78l4" event={"ID":"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303","Type":"ContainerStarted","Data":"94e84cdc1a550877f49efa7d355db2c29b46519522ff0e0605a24c1be9aca8f9"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623366 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k78l4" event={"ID":"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303","Type":"ContainerStarted","Data":"535d3407da9e717a681159c7db63a9f8bf8ac8dac7efcfbfebff8713a391c63c"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623378 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbm8q" event={"ID":"03736bf5-9857-4ca1-b9f8-6474cfefb230","Type":"ContainerDied","Data":"4e8922dfb830e30a42ef115e0427951e4c6a6b6e81ca7dde9190231ea706411c"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623393 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lpl5" 
event={"ID":"d1e6646f-4bdd-478f-a451-a16e2bfc2c08","Type":"ContainerStarted","Data":"4c0556fa9b76c3dee2f8616d64726197ee9162bfdd63c1ead9f3fc32d4b35ff4"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623404 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e0b5-account-create-update-whzj7" event={"ID":"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce","Type":"ContainerStarted","Data":"21aa920083d0750c1c8cfbbc198d5933401e75d9e38c15cf25bed5d682da3114"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623417 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-588e-account-create-update-tkjhl" event={"ID":"adccc6e1-c1da-45d2-aea8-8a381b733fef","Type":"ContainerStarted","Data":"62e6f919c061cea4a3c4da82ab72a38475ac6e9bc5f5bf9970aa433729e5a49c"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.623430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-588e-account-create-update-tkjhl" event={"ID":"adccc6e1-c1da-45d2-aea8-8a381b733fef","Type":"ContainerStarted","Data":"15dbae59045d8c8bdfb7f180850711275108625f1a3ceb94c14068a2029016b5"} Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.627987 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-k78l4" podStartSLOduration=2.627970189 podStartE2EDuration="2.627970189s" podCreationTimestamp="2026-02-19 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:14.623838095 +0000 UTC m=+1185.251681145" watchObservedRunningTime="2026-02-19 21:19:14.627970189 +0000 UTC m=+1185.255813239" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.651626 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-588e-account-create-update-tkjhl" podStartSLOduration=2.6516095589999997 podStartE2EDuration="2.651609559s" 
podCreationTimestamp="2026-02-19 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:14.645954578 +0000 UTC m=+1185.273797638" watchObservedRunningTime="2026-02-19 21:19:14.651609559 +0000 UTC m=+1185.279452609" Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.685062 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79dd89cd97-kwpj9"] Feb 19 21:19:14 crc kubenswrapper[4886]: I0219 21:19:14.695357 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-79dd89cd97-kwpj9"] Feb 19 21:19:15 crc kubenswrapper[4886]: W0219 21:19:15.173307 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62ce6552_da24_4a92_9474_47b352bd969e.slice/crio-e51f37d5b767b746e68990786500f0742be9c0a4376084e193853a93373d7fef WatchSource:0}: Error finding container e51f37d5b767b746e68990786500f0742be9c0a4376084e193853a93373d7fef: Status 404 returned error can't find the container with id e51f37d5b767b746e68990786500f0742be9c0a4376084e193853a93373d7fef Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.451425 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.564109 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2dvf\" (UniqueName: \"kubernetes.io/projected/240c4666-ec12-4498-9f53-dd95d7a33ed4-kube-api-access-k2dvf\") pod \"240c4666-ec12-4498-9f53-dd95d7a33ed4\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.564530 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240c4666-ec12-4498-9f53-dd95d7a33ed4-operator-scripts\") pod \"240c4666-ec12-4498-9f53-dd95d7a33ed4\" (UID: \"240c4666-ec12-4498-9f53-dd95d7a33ed4\") " Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.565753 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240c4666-ec12-4498-9f53-dd95d7a33ed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "240c4666-ec12-4498-9f53-dd95d7a33ed4" (UID: "240c4666-ec12-4498-9f53-dd95d7a33ed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.574089 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240c4666-ec12-4498-9f53-dd95d7a33ed4-kube-api-access-k2dvf" (OuterVolumeSpecName: "kube-api-access-k2dvf") pod "240c4666-ec12-4498-9f53-dd95d7a33ed4" (UID: "240c4666-ec12-4498-9f53-dd95d7a33ed4"). InnerVolumeSpecName "kube-api-access-k2dvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.668654 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240c4666-ec12-4498-9f53-dd95d7a33ed4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.668836 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2dvf\" (UniqueName: \"kubernetes.io/projected/240c4666-ec12-4498-9f53-dd95d7a33ed4-kube-api-access-k2dvf\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.708985 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d082-account-create-update-2h5pl"] Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.788629 4886 generic.go:334] "Generic (PLEG): container finished" podID="8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" containerID="94e84cdc1a550877f49efa7d355db2c29b46519522ff0e0605a24c1be9aca8f9" exitCode=0 Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.788685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k78l4" event={"ID":"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303","Type":"ContainerDied","Data":"94e84cdc1a550877f49efa7d355db2c29b46519522ff0e0605a24c1be9aca8f9"} Feb 19 21:19:15 crc kubenswrapper[4886]: W0219 21:19:15.789089 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4409a578_5632_41e1_bcb2_015deecc0e1a.slice/crio-61d40833a5b0c4d3690f2cfff388a51b85bbb3d65e158efc4948ed61778e2082 WatchSource:0}: Error finding container 61d40833a5b0c4d3690f2cfff388a51b85bbb3d65e158efc4948ed61778e2082: Status 404 returned error can't find the container with id 61d40833a5b0c4d3690f2cfff388a51b85bbb3d65e158efc4948ed61778e2082 Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.819962 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" event={"ID":"62ce6552-da24-4a92-9474-47b352bd969e","Type":"ContainerStarted","Data":"e51f37d5b767b746e68990786500f0742be9c0a4376084e193853a93373d7fef"} Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.825168 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e29c-account-create-update-2897c" Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.825596 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-2897c" event={"ID":"240c4666-ec12-4498-9f53-dd95d7a33ed4","Type":"ContainerDied","Data":"8e048fa7d0b81c0a94044b80b2c89a5f0a87c1e255c14a736f7f46e6dbd1915d"} Feb 19 21:19:15 crc kubenswrapper[4886]: I0219 21:19:15.825650 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e048fa7d0b81c0a94044b80b2c89a5f0a87c1e255c14a736f7f46e6dbd1915d" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.013972 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.080140 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljznl\" (UniqueName: \"kubernetes.io/projected/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-kube-api-access-ljznl\") pod \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.080618 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-config\") pod \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.080709 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-dns-svc\") pod \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\" (UID: \"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81\") " Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.085611 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-kube-api-access-ljznl" (OuterVolumeSpecName: "kube-api-access-ljznl") pod "d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" (UID: "d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81"). InnerVolumeSpecName "kube-api-access-ljznl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.153598 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-config" (OuterVolumeSpecName: "config") pod "d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" (UID: "d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.187218 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljznl\" (UniqueName: \"kubernetes.io/projected/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-kube-api-access-ljznl\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.187328 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.203057 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" (UID: "d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.282911 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.291597 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.359242 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.392796 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjwz\" (UniqueName: \"kubernetes.io/projected/03736bf5-9857-4ca1-b9f8-6474cfefb230-kube-api-access-gkjwz\") pod \"03736bf5-9857-4ca1-b9f8-6474cfefb230\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.393065 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736bf5-9857-4ca1-b9f8-6474cfefb230-operator-scripts\") pod \"03736bf5-9857-4ca1-b9f8-6474cfefb230\" (UID: \"03736bf5-9857-4ca1-b9f8-6474cfefb230\") " Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.393628 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03736bf5-9857-4ca1-b9f8-6474cfefb230-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03736bf5-9857-4ca1-b9f8-6474cfefb230" (UID: "03736bf5-9857-4ca1-b9f8-6474cfefb230"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.394056 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736bf5-9857-4ca1-b9f8-6474cfefb230-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.399554 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03736bf5-9857-4ca1-b9f8-6474cfefb230-kube-api-access-gkjwz" (OuterVolumeSpecName: "kube-api-access-gkjwz") pod "03736bf5-9857-4ca1-b9f8-6474cfefb230" (UID: "03736bf5-9857-4ca1-b9f8-6474cfefb230"). InnerVolumeSpecName "kube-api-access-gkjwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.487507 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wfngr"] Feb 19 21:19:16 crc kubenswrapper[4886]: E0219 21:19:16.488052 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03736bf5-9857-4ca1-b9f8-6474cfefb230" containerName="mariadb-account-create-update" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488071 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03736bf5-9857-4ca1-b9f8-6474cfefb230" containerName="mariadb-account-create-update" Feb 19 21:19:16 crc kubenswrapper[4886]: E0219 21:19:16.488111 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerName="dnsmasq-dns" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488119 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerName="dnsmasq-dns" Feb 19 21:19:16 crc kubenswrapper[4886]: E0219 21:19:16.488137 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622977a2-44e6-4860-9f42-45619a022ed3" containerName="console" Feb 19 21:19:16 crc kubenswrapper[4886]: 
I0219 21:19:16.488145 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="622977a2-44e6-4860-9f42-45619a022ed3" containerName="console" Feb 19 21:19:16 crc kubenswrapper[4886]: E0219 21:19:16.488161 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240c4666-ec12-4498-9f53-dd95d7a33ed4" containerName="mariadb-account-create-update" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488170 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="240c4666-ec12-4498-9f53-dd95d7a33ed4" containerName="mariadb-account-create-update" Feb 19 21:19:16 crc kubenswrapper[4886]: E0219 21:19:16.488186 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerName="init" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488194 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerName="init" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488436 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="240c4666-ec12-4498-9f53-dd95d7a33ed4" containerName="mariadb-account-create-update" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488457 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" containerName="dnsmasq-dns" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488472 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="622977a2-44e6-4860-9f42-45619a022ed3" containerName="console" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.488498 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03736bf5-9857-4ca1-b9f8-6474cfefb230" containerName="mariadb-account-create-update" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.489408 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.491597 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jvnwk" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.493223 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.495894 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjwz\" (UniqueName: \"kubernetes.io/projected/03736bf5-9857-4ca1-b9f8-6474cfefb230-kube-api-access-gkjwz\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.499740 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wfngr"] Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.597689 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-config-data\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.597790 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-combined-ca-bundle\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.597920 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-db-sync-config-data\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " 
pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.597948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4r8j\" (UniqueName: \"kubernetes.io/projected/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-kube-api-access-x4r8j\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.617408 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622977a2-44e6-4860-9f42-45619a022ed3" path="/var/lib/kubelet/pods/622977a2-44e6-4860-9f42-45619a022ed3/volumes" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.702120 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-db-sync-config-data\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.702176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4r8j\" (UniqueName: \"kubernetes.io/projected/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-kube-api-access-x4r8j\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.702325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-config-data\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.702430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-combined-ca-bundle\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.725487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-combined-ca-bundle\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.725895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-db-sync-config-data\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.726096 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4r8j\" (UniqueName: \"kubernetes.io/projected/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-kube-api-access-x4r8j\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.726242 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-config-data\") pod \"glance-db-sync-wfngr\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.824903 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wfngr" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.841978 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerStarted","Data":"d0b3590f64c180736d21b24d642ec1bda0d86e2e15e7ade3315d60658dac0252"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.844659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" event={"ID":"d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81","Type":"ContainerDied","Data":"fe287496c8a5ae5d5dc281554882b98073621e72b6ae1b29a5ab239f3b9bf857"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.844715 4886 scope.go:117] "RemoveContainer" containerID="f06e4623bc1db0d058a9a4b676f9523dc6f0f47b1f59227bce81fc00fa949c27" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.844844 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qlf2w" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.855873 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" event={"ID":"4409a578-5632-41e1-bcb2-015deecc0e1a","Type":"ContainerStarted","Data":"64c49f35dfdcf1596f73b6f7eedf7f760735004db15cc5cf577a124ec72c5a7f"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.855924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" event={"ID":"4409a578-5632-41e1-bcb2-015deecc0e1a","Type":"ContainerStarted","Data":"61d40833a5b0c4d3690f2cfff388a51b85bbb3d65e158efc4948ed61778e2082"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.874117 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qlf2w"] Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.876230 4886 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/root-account-create-update-hbm8q" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.876216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbm8q" event={"ID":"03736bf5-9857-4ca1-b9f8-6474cfefb230","Type":"ContainerDied","Data":"a8f200df453b7eb48d1e8e37f09666111d0c0586a449acd00df6a843757f5620"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.879101 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f200df453b7eb48d1e8e37f09666111d0c0586a449acd00df6a843757f5620" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.880679 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lpl5" event={"ID":"d1e6646f-4bdd-478f-a451-a16e2bfc2c08","Type":"ContainerStarted","Data":"d4309ae7e0febe123227ddf9989775052dd4e03144fb3b9cff3f22d8b30ef772"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.883692 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e0b5-account-create-update-whzj7" event={"ID":"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce","Type":"ContainerStarted","Data":"fce7bee970b236d413937a2161a01b78eeb3004e421cc546a5102e1dd9969645"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.886292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" event={"ID":"62ce6552-da24-4a92-9474-47b352bd969e","Type":"ContainerStarted","Data":"f436bb6b015adf9ee0f489cbd0a473b2e34e58fd2b28b67254f4ce23cc7d3eb0"} Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.888025 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qlf2w"] Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.889411 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" podStartSLOduration=3.889387401 
podStartE2EDuration="3.889387401s" podCreationTimestamp="2026-02-19 21:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:16.879566645 +0000 UTC m=+1187.507409695" watchObservedRunningTime="2026-02-19 21:19:16.889387401 +0000 UTC m=+1187.517230451" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.899059 4886 scope.go:117] "RemoveContainer" containerID="28d34acfdd74b2a2a419d54937bf0b9febcc2e5c349f7f07a18d7691616cbb1e" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.902948 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5lpl5" podStartSLOduration=4.902932849 podStartE2EDuration="4.902932849s" podCreationTimestamp="2026-02-19 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:16.892232252 +0000 UTC m=+1187.520075302" watchObservedRunningTime="2026-02-19 21:19:16.902932849 +0000 UTC m=+1187.530775899" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.928967 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e0b5-account-create-update-whzj7" podStartSLOduration=4.928949809 podStartE2EDuration="4.928949809s" podCreationTimestamp="2026-02-19 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:16.91339772 +0000 UTC m=+1187.541240770" watchObservedRunningTime="2026-02-19 21:19:16.928949809 +0000 UTC m=+1187.556792859" Feb 19 21:19:16 crc kubenswrapper[4886]: I0219 21:19:16.959826 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" podStartSLOduration=3.95980921 podStartE2EDuration="3.95980921s" podCreationTimestamp="2026-02-19 21:19:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:16.928428046 +0000 UTC m=+1187.556271096" watchObservedRunningTime="2026-02-19 21:19:16.95980921 +0000 UTC m=+1187.587652260" Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.570672 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wfngr"] Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.899553 4886 generic.go:334] "Generic (PLEG): container finished" podID="62ce6552-da24-4a92-9474-47b352bd969e" containerID="f436bb6b015adf9ee0f489cbd0a473b2e34e58fd2b28b67254f4ce23cc7d3eb0" exitCode=0 Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.899613 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" event={"ID":"62ce6552-da24-4a92-9474-47b352bd969e","Type":"ContainerDied","Data":"f436bb6b015adf9ee0f489cbd0a473b2e34e58fd2b28b67254f4ce23cc7d3eb0"} Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.902232 4886 generic.go:334] "Generic (PLEG): container finished" podID="adccc6e1-c1da-45d2-aea8-8a381b733fef" containerID="62e6f919c061cea4a3c4da82ab72a38475ac6e9bc5f5bf9970aa433729e5a49c" exitCode=0 Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.902329 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-588e-account-create-update-tkjhl" event={"ID":"adccc6e1-c1da-45d2-aea8-8a381b733fef","Type":"ContainerDied","Data":"62e6f919c061cea4a3c4da82ab72a38475ac6e9bc5f5bf9970aa433729e5a49c"} Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.906743 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1e6646f-4bdd-478f-a451-a16e2bfc2c08" containerID="d4309ae7e0febe123227ddf9989775052dd4e03144fb3b9cff3f22d8b30ef772" exitCode=0 Feb 19 21:19:17 crc kubenswrapper[4886]: I0219 21:19:17.907592 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-5lpl5" event={"ID":"d1e6646f-4bdd-478f-a451-a16e2bfc2c08","Type":"ContainerDied","Data":"d4309ae7e0febe123227ddf9989775052dd4e03144fb3b9cff3f22d8b30ef772"} Feb 19 21:19:18 crc kubenswrapper[4886]: I0219 21:19:18.613813 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81" path="/var/lib/kubelet/pods/d7bc88d0-a6c8-4872-8fb1-2f51d11b2a81/volumes" Feb 19 21:19:18 crc kubenswrapper[4886]: I0219 21:19:18.932193 4886 generic.go:334] "Generic (PLEG): container finished" podID="3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" containerID="fce7bee970b236d413937a2161a01b78eeb3004e421cc546a5102e1dd9969645" exitCode=0 Feb 19 21:19:18 crc kubenswrapper[4886]: I0219 21:19:18.932322 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e0b5-account-create-update-whzj7" event={"ID":"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce","Type":"ContainerDied","Data":"fce7bee970b236d413937a2161a01b78eeb3004e421cc546a5102e1dd9969645"} Feb 19 21:19:19 crc kubenswrapper[4886]: I0219 21:19:19.456437 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hbm8q"] Feb 19 21:19:19 crc kubenswrapper[4886]: I0219 21:19:19.476476 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hbm8q"] Feb 19 21:19:19 crc kubenswrapper[4886]: I0219 21:19:19.943396 4886 generic.go:334] "Generic (PLEG): container finished" podID="4409a578-5632-41e1-bcb2-015deecc0e1a" containerID="64c49f35dfdcf1596f73b6f7eedf7f760735004db15cc5cf577a124ec72c5a7f" exitCode=0 Feb 19 21:19:19 crc kubenswrapper[4886]: I0219 21:19:19.943495 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" event={"ID":"4409a578-5632-41e1-bcb2-015deecc0e1a","Type":"ContainerDied","Data":"64c49f35dfdcf1596f73b6f7eedf7f760735004db15cc5cf577a124ec72c5a7f"} Feb 19 21:19:20 crc kubenswrapper[4886]: I0219 
21:19:20.615560 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03736bf5-9857-4ca1-b9f8-6474cfefb230" path="/var/lib/kubelet/pods/03736bf5-9857-4ca1-b9f8-6474cfefb230/volumes" Feb 19 21:19:20 crc kubenswrapper[4886]: I0219 21:19:20.797068 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:20 crc kubenswrapper[4886]: E0219 21:19:20.797581 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 21:19:20 crc kubenswrapper[4886]: E0219 21:19:20.797595 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 21:19:20 crc kubenswrapper[4886]: E0219 21:19:20.797662 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift podName:da7809cc-c661-4b8c-ab78-4f87229b18d1 nodeName:}" failed. No retries permitted until 2026-02-19 21:19:36.797643043 +0000 UTC m=+1207.425486103 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift") pod "swift-storage-0" (UID: "da7809cc-c661-4b8c-ab78-4f87229b18d1") : configmap "swift-ring-files" not found Feb 19 21:19:20 crc kubenswrapper[4886]: I0219 21:19:20.955734 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerStarted","Data":"e32970c64c452030d86770e4c7ef38d6c3995f1e790412d8632dbc857d44dece"} Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.222138 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dz5ch" podUID="270e230a-d5f1-40ff-968c-cd3a77504bf8" containerName="ovn-controller" probeResult="failure" output=< Feb 19 21:19:21 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 21:19:21 crc kubenswrapper[4886]: > Feb 19 21:19:21 crc kubenswrapper[4886]: W0219 21:19:21.305427 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf652b0b8_6eee_4ebb_b0ad_e22de89080a6.slice/crio-65fd2daabd6a5a9c69af608c910d8b931ee7e670bed6b14fb6e5204ebc173bea WatchSource:0}: Error finding container 65fd2daabd6a5a9c69af608c910d8b931ee7e670bed6b14fb6e5204ebc173bea: Status 404 returned error can't find the container with id 65fd2daabd6a5a9c69af608c910d8b931ee7e670bed6b14fb6e5204ebc173bea Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.502328 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.513191 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8rfn\" (UniqueName: \"kubernetes.io/projected/adccc6e1-c1da-45d2-aea8-8a381b733fef-kube-api-access-w8rfn\") pod \"adccc6e1-c1da-45d2-aea8-8a381b733fef\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.513315 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adccc6e1-c1da-45d2-aea8-8a381b733fef-operator-scripts\") pod \"adccc6e1-c1da-45d2-aea8-8a381b733fef\" (UID: \"adccc6e1-c1da-45d2-aea8-8a381b733fef\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.515097 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adccc6e1-c1da-45d2-aea8-8a381b733fef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adccc6e1-c1da-45d2-aea8-8a381b733fef" (UID: "adccc6e1-c1da-45d2-aea8-8a381b733fef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.521611 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adccc6e1-c1da-45d2-aea8-8a381b733fef-kube-api-access-w8rfn" (OuterVolumeSpecName: "kube-api-access-w8rfn") pod "adccc6e1-c1da-45d2-aea8-8a381b733fef" (UID: "adccc6e1-c1da-45d2-aea8-8a381b733fef"). InnerVolumeSpecName "kube-api-access-w8rfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.586782 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.594715 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.617011 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8rfn\" (UniqueName: \"kubernetes.io/projected/adccc6e1-c1da-45d2-aea8-8a381b733fef-kube-api-access-w8rfn\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.617037 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adccc6e1-c1da-45d2-aea8-8a381b733fef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.620103 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.638060 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.641625 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k78l4" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.718705 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4409a578-5632-41e1-bcb2-015deecc0e1a-operator-scripts\") pod \"4409a578-5632-41e1-bcb2-015deecc0e1a\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.718810 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-operator-scripts\") pod \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.718895 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-operator-scripts\") pod \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.718957 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ce6552-da24-4a92-9474-47b352bd969e-operator-scripts\") pod \"62ce6552-da24-4a92-9474-47b352bd969e\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719074 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6mc8\" (UniqueName: \"kubernetes.io/projected/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-kube-api-access-p6mc8\") pod \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719103 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-p2tjt\" (UniqueName: \"kubernetes.io/projected/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-kube-api-access-p2tjt\") pod \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\" (UID: \"d1e6646f-4bdd-478f-a451-a16e2bfc2c08\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719163 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-operator-scripts\") pod \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\" (UID: \"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719204 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1e6646f-4bdd-478f-a451-a16e2bfc2c08" (UID: "d1e6646f-4bdd-478f-a451-a16e2bfc2c08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719214 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb8dq\" (UniqueName: \"kubernetes.io/projected/62ce6552-da24-4a92-9474-47b352bd969e-kube-api-access-wb8dq\") pod \"62ce6552-da24-4a92-9474-47b352bd969e\" (UID: \"62ce6552-da24-4a92-9474-47b352bd969e\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719311 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" (UID: "8f78fcbc-7dfd-45f5-8d6a-ce814efb3303"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719321 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldgq\" (UniqueName: \"kubernetes.io/projected/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-kube-api-access-jldgq\") pod \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\" (UID: \"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719381 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvwdb\" (UniqueName: \"kubernetes.io/projected/4409a578-5632-41e1-bcb2-015deecc0e1a-kube-api-access-xvwdb\") pod \"4409a578-5632-41e1-bcb2-015deecc0e1a\" (UID: \"4409a578-5632-41e1-bcb2-015deecc0e1a\") " Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719689 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ce6552-da24-4a92-9474-47b352bd969e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62ce6552-da24-4a92-9474-47b352bd969e" (UID: "62ce6552-da24-4a92-9474-47b352bd969e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.719874 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" (UID: "3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.720529 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.720544 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ce6552-da24-4a92-9474-47b352bd969e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.720549 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4409a578-5632-41e1-bcb2-015deecc0e1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4409a578-5632-41e1-bcb2-015deecc0e1a" (UID: "4409a578-5632-41e1-bcb2-015deecc0e1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.720554 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.720590 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.722550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-kube-api-access-p2tjt" (OuterVolumeSpecName: "kube-api-access-p2tjt") pod "d1e6646f-4bdd-478f-a451-a16e2bfc2c08" (UID: "d1e6646f-4bdd-478f-a451-a16e2bfc2c08"). InnerVolumeSpecName "kube-api-access-p2tjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.723017 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4409a578-5632-41e1-bcb2-015deecc0e1a-kube-api-access-xvwdb" (OuterVolumeSpecName: "kube-api-access-xvwdb") pod "4409a578-5632-41e1-bcb2-015deecc0e1a" (UID: "4409a578-5632-41e1-bcb2-015deecc0e1a"). InnerVolumeSpecName "kube-api-access-xvwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.723388 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-kube-api-access-p6mc8" (OuterVolumeSpecName: "kube-api-access-p6mc8") pod "3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" (UID: "3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce"). InnerVolumeSpecName "kube-api-access-p6mc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.723442 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-kube-api-access-jldgq" (OuterVolumeSpecName: "kube-api-access-jldgq") pod "8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" (UID: "8f78fcbc-7dfd-45f5-8d6a-ce814efb3303"). InnerVolumeSpecName "kube-api-access-jldgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.724456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ce6552-da24-4a92-9474-47b352bd969e-kube-api-access-wb8dq" (OuterVolumeSpecName: "kube-api-access-wb8dq") pod "62ce6552-da24-4a92-9474-47b352bd969e" (UID: "62ce6552-da24-4a92-9474-47b352bd969e"). InnerVolumeSpecName "kube-api-access-wb8dq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.821695 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6mc8\" (UniqueName: \"kubernetes.io/projected/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce-kube-api-access-p6mc8\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.821865 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2tjt\" (UniqueName: \"kubernetes.io/projected/d1e6646f-4bdd-478f-a451-a16e2bfc2c08-kube-api-access-p2tjt\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.821917 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb8dq\" (UniqueName: \"kubernetes.io/projected/62ce6552-da24-4a92-9474-47b352bd969e-kube-api-access-wb8dq\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.821967 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jldgq\" (UniqueName: \"kubernetes.io/projected/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303-kube-api-access-jldgq\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.822033 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvwdb\" (UniqueName: \"kubernetes.io/projected/4409a578-5632-41e1-bcb2-015deecc0e1a-kube-api-access-xvwdb\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.822109 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4409a578-5632-41e1-bcb2-015deecc0e1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.966454 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wfngr" 
event={"ID":"f652b0b8-6eee-4ebb-b0ad-e22de89080a6","Type":"ContainerStarted","Data":"65fd2daabd6a5a9c69af608c910d8b931ee7e670bed6b14fb6e5204ebc173bea"} Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.969636 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lpl5" event={"ID":"d1e6646f-4bdd-478f-a451-a16e2bfc2c08","Type":"ContainerDied","Data":"4c0556fa9b76c3dee2f8616d64726197ee9162bfdd63c1ead9f3fc32d4b35ff4"} Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.969670 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c0556fa9b76c3dee2f8616d64726197ee9162bfdd63c1ead9f3fc32d4b35ff4" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.969734 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lpl5" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.984885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e0b5-account-create-update-whzj7" event={"ID":"3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce","Type":"ContainerDied","Data":"21aa920083d0750c1c8cfbbc198d5933401e75d9e38c15cf25bed5d682da3114"} Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.985053 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21aa920083d0750c1c8cfbbc198d5933401e75d9e38c15cf25bed5d682da3114" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.984947 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e0b5-account-create-update-whzj7" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.988438 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" event={"ID":"62ce6552-da24-4a92-9474-47b352bd969e","Type":"ContainerDied","Data":"e51f37d5b767b746e68990786500f0742be9c0a4376084e193853a93373d7fef"} Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.988532 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e51f37d5b767b746e68990786500f0742be9c0a4376084e193853a93373d7fef" Feb 19 21:19:21 crc kubenswrapper[4886]: I0219 21:19:21.988477 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vmhgw" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.002129 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wkj8t" event={"ID":"9608b54a-4daa-4b61-8be4-30a5f155512d","Type":"ContainerStarted","Data":"4da8ac7a980c704c932d24139144bace444d699b30330d567eb125e7178af071"} Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.008991 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-588e-account-create-update-tkjhl" event={"ID":"adccc6e1-c1da-45d2-aea8-8a381b733fef","Type":"ContainerDied","Data":"15dbae59045d8c8bdfb7f180850711275108625f1a3ceb94c14068a2029016b5"} Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.009026 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15dbae59045d8c8bdfb7f180850711275108625f1a3ceb94c14068a2029016b5" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.009081 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-588e-account-create-update-tkjhl" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.016621 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" event={"ID":"4409a578-5632-41e1-bcb2-015deecc0e1a","Type":"ContainerDied","Data":"61d40833a5b0c4d3690f2cfff388a51b85bbb3d65e158efc4948ed61778e2082"} Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.017310 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d40833a5b0c4d3690f2cfff388a51b85bbb3d65e158efc4948ed61778e2082" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.017539 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d082-account-create-update-2h5pl" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.021785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k78l4" event={"ID":"8f78fcbc-7dfd-45f5-8d6a-ce814efb3303","Type":"ContainerDied","Data":"535d3407da9e717a681159c7db63a9f8bf8ac8dac7efcfbfebff8713a391c63c"} Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.021829 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535d3407da9e717a681159c7db63a9f8bf8ac8dac7efcfbfebff8713a391c63c" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.021889 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k78l4" Feb 19 21:19:22 crc kubenswrapper[4886]: I0219 21:19:22.030679 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wkj8t" podStartSLOduration=5.871553243 podStartE2EDuration="14.030662035s" podCreationTimestamp="2026-02-19 21:19:08 +0000 UTC" firstStartedPulling="2026-02-19 21:19:13.19973004 +0000 UTC m=+1183.827573100" lastFinishedPulling="2026-02-19 21:19:21.358838802 +0000 UTC m=+1191.986681892" observedRunningTime="2026-02-19 21:19:22.03007654 +0000 UTC m=+1192.657919600" watchObservedRunningTime="2026-02-19 21:19:22.030662035 +0000 UTC m=+1192.658505085" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.871651 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv"] Feb 19 21:19:23 crc kubenswrapper[4886]: E0219 21:19:23.872250 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e6646f-4bdd-478f-a451-a16e2bfc2c08" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872275 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e6646f-4bdd-478f-a451-a16e2bfc2c08" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: E0219 21:19:23.872291 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4409a578-5632-41e1-bcb2-015deecc0e1a" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872297 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4409a578-5632-41e1-bcb2-015deecc0e1a" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: E0219 21:19:23.872308 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872314 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: E0219 21:19:23.872328 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ce6552-da24-4a92-9474-47b352bd969e" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872334 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ce6552-da24-4a92-9474-47b352bd969e" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: E0219 21:19:23.872353 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adccc6e1-c1da-45d2-aea8-8a381b733fef" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872359 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="adccc6e1-c1da-45d2-aea8-8a381b733fef" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: E0219 21:19:23.872370 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872376 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872546 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="adccc6e1-c1da-45d2-aea8-8a381b733fef" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872554 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872562 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4409a578-5632-41e1-bcb2-015deecc0e1a" 
containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872575 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ce6552-da24-4a92-9474-47b352bd969e" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872587 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" containerName="mariadb-account-create-update" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.872594 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e6646f-4bdd-478f-a451-a16e2bfc2c08" containerName="mariadb-database-create" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.873212 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.897589 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv"] Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.987066 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrbwg\" (UniqueName: \"kubernetes.io/projected/b7774da6-bf0a-41f3-9ca6-26ca8c567053-kube-api-access-rrbwg\") pod \"mysqld-exporter-openstack-cell1-db-create-6w6vv\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:23 crc kubenswrapper[4886]: I0219 21:19:23.987222 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7774da6-bf0a-41f3-9ca6-26ca8c567053-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6w6vv\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:24 
crc kubenswrapper[4886]: I0219 21:19:24.053606 4886 generic.go:334] "Generic (PLEG): container finished" podID="638a08ec-2f97-4b36-919f-9346af224a16" containerID="739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d" exitCode=0 Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.053662 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"638a08ec-2f97-4b36-919f-9346af224a16","Type":"ContainerDied","Data":"739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d"} Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.064620 4886 generic.go:334] "Generic (PLEG): container finished" podID="c4270056-5929-46be-bced-090af7fb6761" containerID="ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1" exitCode=0 Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.064691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4270056-5929-46be-bced-090af7fb6761","Type":"ContainerDied","Data":"ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1"} Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.073018 4886 generic.go:334] "Generic (PLEG): container finished" podID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerID="aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b" exitCode=0 Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.073118 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9b78f5c0-b665-4723-bddd-e6cccd0fca87","Type":"ContainerDied","Data":"aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b"} Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.076398 4886 generic.go:334] "Generic (PLEG): container finished" podID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerID="169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566" exitCode=0 Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.076441 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1ec4082-af5d-46ce-a7ca-88091e668a22","Type":"ContainerDied","Data":"169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566"} Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.086113 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c4ab-account-create-update-zxvnm"] Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.087445 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.088480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7774da6-bf0a-41f3-9ca6-26ca8c567053-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6w6vv\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.088621 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrbwg\" (UniqueName: \"kubernetes.io/projected/b7774da6-bf0a-41f3-9ca6-26ca8c567053-kube-api-access-rrbwg\") pod \"mysqld-exporter-openstack-cell1-db-create-6w6vv\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.089061 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.089747 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7774da6-bf0a-41f3-9ca6-26ca8c567053-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6w6vv\" (UID: 
\"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.111778 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrbwg\" (UniqueName: \"kubernetes.io/projected/b7774da6-bf0a-41f3-9ca6-26ca8c567053-kube-api-access-rrbwg\") pod \"mysqld-exporter-openstack-cell1-db-create-6w6vv\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.117535 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c4ab-account-create-update-zxvnm"] Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.188952 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.190522 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5022181-22f3-477d-83c1-94270a8c9da3-operator-scripts\") pod \"mysqld-exporter-c4ab-account-create-update-zxvnm\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.190600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpp2\" (UniqueName: \"kubernetes.io/projected/e5022181-22f3-477d-83c1-94270a8c9da3-kube-api-access-hqpp2\") pod \"mysqld-exporter-c4ab-account-create-update-zxvnm\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.293140 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5022181-22f3-477d-83c1-94270a8c9da3-operator-scripts\") pod \"mysqld-exporter-c4ab-account-create-update-zxvnm\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.293237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqpp2\" (UniqueName: \"kubernetes.io/projected/e5022181-22f3-477d-83c1-94270a8c9da3-kube-api-access-hqpp2\") pod \"mysqld-exporter-c4ab-account-create-update-zxvnm\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.294154 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5022181-22f3-477d-83c1-94270a8c9da3-operator-scripts\") pod \"mysqld-exporter-c4ab-account-create-update-zxvnm\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.310088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqpp2\" (UniqueName: \"kubernetes.io/projected/e5022181-22f3-477d-83c1-94270a8c9da3-kube-api-access-hqpp2\") pod \"mysqld-exporter-c4ab-account-create-update-zxvnm\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.412781 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.451320 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z8xjt"] Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.452570 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.454526 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.466041 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z8xjt"] Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.504544 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e585e30-61e9-46d4-9317-561c8e70c60b-operator-scripts\") pod \"root-account-create-update-z8xjt\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.504815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487rq\" (UniqueName: \"kubernetes.io/projected/2e585e30-61e9-46d4-9317-561c8e70c60b-kube-api-access-487rq\") pod \"root-account-create-update-z8xjt\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.606142 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e585e30-61e9-46d4-9317-561c8e70c60b-operator-scripts\") pod \"root-account-create-update-z8xjt\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " 
pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.606217 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-487rq\" (UniqueName: \"kubernetes.io/projected/2e585e30-61e9-46d4-9317-561c8e70c60b-kube-api-access-487rq\") pod \"root-account-create-update-z8xjt\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.606848 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e585e30-61e9-46d4-9317-561c8e70c60b-operator-scripts\") pod \"root-account-create-update-z8xjt\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.624177 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-487rq\" (UniqueName: \"kubernetes.io/projected/2e585e30-61e9-46d4-9317-561c8e70c60b-kube-api-access-487rq\") pod \"root-account-create-update-z8xjt\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:24 crc kubenswrapper[4886]: I0219 21:19:24.785166 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.222694 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dz5ch" podUID="270e230a-d5f1-40ff-968c-cd3a77504bf8" containerName="ovn-controller" probeResult="failure" output=< Feb 19 21:19:26 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 21:19:26 crc kubenswrapper[4886]: > Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.234481 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.248710 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4zsb6" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.498632 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dz5ch-config-ncl55"] Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.500001 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.513707 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.528195 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dz5ch-config-ncl55"] Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.554066 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.554613 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-additional-scripts\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.554713 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run-ovn\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.554810 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xthk\" (UniqueName: \"kubernetes.io/projected/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-kube-api-access-4xthk\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: 
\"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.554916 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-log-ovn\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.555001 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-scripts\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.658109 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.658152 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-additional-scripts\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.658182 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run-ovn\") pod \"ovn-controller-dz5ch-config-ncl55\" 
(UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.658226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xthk\" (UniqueName: \"kubernetes.io/projected/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-kube-api-access-4xthk\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.658254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-log-ovn\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.658291 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-scripts\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.659185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.659622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-additional-scripts\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: 
\"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.659690 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run-ovn\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.660329 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-log-ovn\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.661801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-scripts\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.678275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xthk\" (UniqueName: \"kubernetes.io/projected/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-kube-api-access-4xthk\") pod \"ovn-controller-dz5ch-config-ncl55\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:26 crc kubenswrapper[4886]: I0219 21:19:26.842407 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:28 crc kubenswrapper[4886]: I0219 21:19:28.701073 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c4ab-account-create-update-zxvnm"] Feb 19 21:19:28 crc kubenswrapper[4886]: W0219 21:19:28.702940 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5022181_22f3_477d_83c1_94270a8c9da3.slice/crio-d1623176ef9b14b771957f4ca2cd15efa292d82223d8020d8bcf3dcb05afac13 WatchSource:0}: Error finding container d1623176ef9b14b771957f4ca2cd15efa292d82223d8020d8bcf3dcb05afac13: Status 404 returned error can't find the container with id d1623176ef9b14b771957f4ca2cd15efa292d82223d8020d8bcf3dcb05afac13 Feb 19 21:19:28 crc kubenswrapper[4886]: I0219 21:19:28.724053 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dz5ch-config-ncl55"] Feb 19 21:19:28 crc kubenswrapper[4886]: I0219 21:19:28.736718 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z8xjt"] Feb 19 21:19:28 crc kubenswrapper[4886]: W0219 21:19:28.737516 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc673bdb1_1bbb_4e9f_96e7_435bd43b10c7.slice/crio-dc06225bba60e09d2d20129e7e2073de96f9fefa26f355ff881515bba52c9b60 WatchSource:0}: Error finding container dc06225bba60e09d2d20129e7e2073de96f9fefa26f355ff881515bba52c9b60: Status 404 returned error can't find the container with id dc06225bba60e09d2d20129e7e2073de96f9fefa26f355ff881515bba52c9b60 Feb 19 21:19:28 crc kubenswrapper[4886]: W0219 21:19:28.740977 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e585e30_61e9_46d4_9317_561c8e70c60b.slice/crio-3f1a294475400068634763a8417fea678d61be028043bd6d5f36086531479eb5 
WatchSource:0}: Error finding container 3f1a294475400068634763a8417fea678d61be028043bd6d5f36086531479eb5: Status 404 returned error can't find the container with id 3f1a294475400068634763a8417fea678d61be028043bd6d5f36086531479eb5 Feb 19 21:19:28 crc kubenswrapper[4886]: I0219 21:19:28.909683 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv"] Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.140150 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9b78f5c0-b665-4723-bddd-e6cccd0fca87","Type":"ContainerStarted","Data":"6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.140402 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.141885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" event={"ID":"e5022181-22f3-477d-83c1-94270a8c9da3","Type":"ContainerStarted","Data":"d1623176ef9b14b771957f4ca2cd15efa292d82223d8020d8bcf3dcb05afac13"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.147048 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z8xjt" event={"ID":"2e585e30-61e9-46d4-9317-561c8e70c60b","Type":"ContainerStarted","Data":"3f1a294475400068634763a8417fea678d61be028043bd6d5f36086531479eb5"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.154241 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1ec4082-af5d-46ce-a7ca-88091e668a22","Type":"ContainerStarted","Data":"6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.154473 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.157358 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"638a08ec-2f97-4b36-919f-9346af224a16","Type":"ContainerStarted","Data":"2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.157582 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.160614 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4270056-5929-46be-bced-090af7fb6761","Type":"ContainerStarted","Data":"5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.160788 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.162832 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dz5ch-config-ncl55" event={"ID":"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7","Type":"ContainerStarted","Data":"dc06225bba60e09d2d20129e7e2073de96f9fefa26f355ff881515bba52c9b60"} Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.177605 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=61.542085086 podStartE2EDuration="1m13.177586892s" podCreationTimestamp="2026-02-19 21:18:16 +0000 UTC" firstStartedPulling="2026-02-19 21:18:35.520246733 +0000 UTC m=+1146.148089783" lastFinishedPulling="2026-02-19 21:18:47.155748519 +0000 UTC m=+1157.783591589" observedRunningTime="2026-02-19 21:19:29.169240103 +0000 UTC m=+1199.797083163" watchObservedRunningTime="2026-02-19 21:19:29.177586892 +0000 UTC m=+1199.805429942" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.226180 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=61.063759876 podStartE2EDuration="1m13.226162315s" podCreationTimestamp="2026-02-19 21:18:16 +0000 UTC" firstStartedPulling="2026-02-19 21:18:34.991538615 +0000 UTC m=+1145.619381705" lastFinishedPulling="2026-02-19 21:18:47.153941094 +0000 UTC m=+1157.781784144" observedRunningTime="2026-02-19 21:19:29.203791146 +0000 UTC m=+1199.831634196" watchObservedRunningTime="2026-02-19 21:19:29.226162315 +0000 UTC m=+1199.854005365" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.243065 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=63.173733926 podStartE2EDuration="1m13.243051057s" podCreationTimestamp="2026-02-19 21:18:16 +0000 UTC" firstStartedPulling="2026-02-19 21:18:36.583647178 +0000 UTC m=+1147.211490228" lastFinishedPulling="2026-02-19 21:18:46.652964279 +0000 UTC m=+1157.280807359" observedRunningTime="2026-02-19 21:19:29.234856262 +0000 UTC m=+1199.862699312" watchObservedRunningTime="2026-02-19 21:19:29.243051057 +0000 UTC m=+1199.870894107" Feb 19 21:19:29 crc kubenswrapper[4886]: I0219 21:19:29.266347 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.229353593 podStartE2EDuration="1m13.266330219s" podCreationTimestamp="2026-02-19 21:18:16 +0000 UTC" firstStartedPulling="2026-02-19 21:18:35.116690451 +0000 UTC m=+1145.744533501" lastFinishedPulling="2026-02-19 21:18:47.153667077 +0000 UTC m=+1157.781510127" observedRunningTime="2026-02-19 21:19:29.261653642 +0000 UTC m=+1199.889496692" watchObservedRunningTime="2026-02-19 21:19:29.266330219 +0000 UTC m=+1199.894173269" Feb 19 21:19:31 crc kubenswrapper[4886]: I0219 21:19:31.249526 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dz5ch" podUID="270e230a-d5f1-40ff-968c-cd3a77504bf8" 
containerName="ovn-controller" probeResult="failure" output=< Feb 19 21:19:31 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 21:19:31 crc kubenswrapper[4886]: > Feb 19 21:19:32 crc kubenswrapper[4886]: I0219 21:19:32.226409 4886 generic.go:334] "Generic (PLEG): container finished" podID="9608b54a-4daa-4b61-8be4-30a5f155512d" containerID="4da8ac7a980c704c932d24139144bace444d699b30330d567eb125e7178af071" exitCode=0 Feb 19 21:19:32 crc kubenswrapper[4886]: I0219 21:19:32.226504 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wkj8t" event={"ID":"9608b54a-4daa-4b61-8be4-30a5f155512d","Type":"ContainerDied","Data":"4da8ac7a980c704c932d24139144bace444d699b30330d567eb125e7178af071"} Feb 19 21:19:36 crc kubenswrapper[4886]: I0219 21:19:36.228912 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dz5ch" podUID="270e230a-d5f1-40ff-968c-cd3a77504bf8" containerName="ovn-controller" probeResult="failure" output=< Feb 19 21:19:36 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 21:19:36 crc kubenswrapper[4886]: > Feb 19 21:19:36 crc kubenswrapper[4886]: I0219 21:19:36.896706 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:36 crc kubenswrapper[4886]: I0219 21:19:36.903843 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/da7809cc-c661-4b8c-ab78-4f87229b18d1-etc-swift\") pod \"swift-storage-0\" (UID: \"da7809cc-c661-4b8c-ab78-4f87229b18d1\") " pod="openstack/swift-storage-0" Feb 19 21:19:37 crc kubenswrapper[4886]: I0219 
21:19:37.158888 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 21:19:38 crc kubenswrapper[4886]: I0219 21:19:38.298196 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 19 21:19:38 crc kubenswrapper[4886]: I0219 21:19:38.341190 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 19 21:19:38 crc kubenswrapper[4886]: I0219 21:19:38.354948 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 19 21:19:39 crc kubenswrapper[4886]: I0219 21:19:39.515008 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 21:19:40 crc kubenswrapper[4886]: E0219 21:19:40.093186 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 19 21:19:40 crc kubenswrapper[4886]: E0219 21:19:40.093440 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4r8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wfngr_openstack(f652b0b8-6eee-4ebb-b0ad-e22de89080a6): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 19 21:19:40 crc kubenswrapper[4886]: E0219 21:19:40.094707 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wfngr" podUID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" Feb 19 21:19:40 crc kubenswrapper[4886]: W0219 21:19:40.209107 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7774da6_bf0a_41f3_9ca6_26ca8c567053.slice/crio-b8f188688da930b5d0c2644e83ec10f88c55818655fc9d4197944b4f0030e90a WatchSource:0}: Error finding container b8f188688da930b5d0c2644e83ec10f88c55818655fc9d4197944b4f0030e90a: Status 404 returned error can't find the container with id b8f188688da930b5d0c2644e83ec10f88c55818655fc9d4197944b4f0030e90a Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.309937 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wkj8t" event={"ID":"9608b54a-4daa-4b61-8be4-30a5f155512d","Type":"ContainerDied","Data":"0eadaff11187afcffdf071ac7974ee97f9fdb8ffe604f4579446bd4a1bfbb493"} Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.310139 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eadaff11187afcffdf071ac7974ee97f9fdb8ffe604f4579446bd4a1bfbb493" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.312409 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" event={"ID":"b7774da6-bf0a-41f3-9ca6-26ca8c567053","Type":"ContainerStarted","Data":"b8f188688da930b5d0c2644e83ec10f88c55818655fc9d4197944b4f0030e90a"} Feb 19 21:19:40 crc kubenswrapper[4886]: E0219 21:19:40.313953 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-wfngr" podUID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.345709 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498114 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-scripts\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498501 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-dispersionconf\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498579 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-swiftconf\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498664 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-combined-ca-bundle\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498697 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/9608b54a-4daa-4b61-8be4-30a5f155512d-etc-swift\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498719 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl7g7\" (UniqueName: \"kubernetes.io/projected/9608b54a-4daa-4b61-8be4-30a5f155512d-kube-api-access-rl7g7\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.498774 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-ring-data-devices\") pod \"9608b54a-4daa-4b61-8be4-30a5f155512d\" (UID: \"9608b54a-4daa-4b61-8be4-30a5f155512d\") " Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.501240 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.501969 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9608b54a-4daa-4b61-8be4-30a5f155512d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.512336 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.516669 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9608b54a-4daa-4b61-8be4-30a5f155512d-kube-api-access-rl7g7" (OuterVolumeSpecName: "kube-api-access-rl7g7") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "kube-api-access-rl7g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.529760 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-scripts" (OuterVolumeSpecName: "scripts") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.533613 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.536426 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9608b54a-4daa-4b61-8be4-30a5f155512d" (UID: "9608b54a-4daa-4b61-8be4-30a5f155512d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601495 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601521 4886 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601530 4886 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601539 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608b54a-4daa-4b61-8be4-30a5f155512d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601547 4886 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9608b54a-4daa-4b61-8be4-30a5f155512d-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601555 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl7g7\" (UniqueName: 
\"kubernetes.io/projected/9608b54a-4daa-4b61-8be4-30a5f155512d-kube-api-access-rl7g7\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.601563 4886 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9608b54a-4daa-4b61-8be4-30a5f155512d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:40 crc kubenswrapper[4886]: I0219 21:19:40.868914 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 21:19:40 crc kubenswrapper[4886]: W0219 21:19:40.876154 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7809cc_c661_4b8c_ab78_4f87229b18d1.slice/crio-309d3e59820e38c687025e2954a6f361842491396833bec4ce6d23102187dd51 WatchSource:0}: Error finding container 309d3e59820e38c687025e2954a6f361842491396833bec4ce6d23102187dd51: Status 404 returned error can't find the container with id 309d3e59820e38c687025e2954a6f361842491396833bec4ce6d23102187dd51 Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.320687 4886 generic.go:334] "Generic (PLEG): container finished" podID="e5022181-22f3-477d-83c1-94270a8c9da3" containerID="720d6621ef921e61233a34373a9f281e31ebed1d3db64d1d798a17262a306598" exitCode=0 Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.320739 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" event={"ID":"e5022181-22f3-477d-83c1-94270a8c9da3","Type":"ContainerDied","Data":"720d6621ef921e61233a34373a9f281e31ebed1d3db64d1d798a17262a306598"} Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.322349 4886 generic.go:334] "Generic (PLEG): container finished" podID="b7774da6-bf0a-41f3-9ca6-26ca8c567053" containerID="b114aa8efb6597102aad65bd2a9a09282ded2a312a2b4223257dfc74eaa84420" exitCode=0 Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.322395 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" event={"ID":"b7774da6-bf0a-41f3-9ca6-26ca8c567053","Type":"ContainerDied","Data":"b114aa8efb6597102aad65bd2a9a09282ded2a312a2b4223257dfc74eaa84420"} Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.323399 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"309d3e59820e38c687025e2954a6f361842491396833bec4ce6d23102187dd51"} Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.329887 4886 generic.go:334] "Generic (PLEG): container finished" podID="2e585e30-61e9-46d4-9317-561c8e70c60b" containerID="234ca7b7dd45d32287887986e280c069c3ffc9f932923853052a84dc2ffc4649" exitCode=0 Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.330009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z8xjt" event={"ID":"2e585e30-61e9-46d4-9317-561c8e70c60b","Type":"ContainerDied","Data":"234ca7b7dd45d32287887986e280c069c3ffc9f932923853052a84dc2ffc4649"} Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.335521 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerStarted","Data":"5901ab19ac9f8cce17ffd4fc604c8959e1c78ad63506751ae97cd5b24739368d"} Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.338796 4886 generic.go:334] "Generic (PLEG): container finished" podID="c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" containerID="96d73c58b46d04c1c265dd1d99f23c8ac2de892994d55a0657ff7b540302976e" exitCode=0 Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.338872 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wkj8t" Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.339694 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dz5ch-config-ncl55" event={"ID":"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7","Type":"ContainerDied","Data":"96d73c58b46d04c1c265dd1d99f23c8ac2de892994d55a0657ff7b540302976e"} Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.342377 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dz5ch" Feb 19 21:19:41 crc kubenswrapper[4886]: I0219 21:19:41.425282 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.721761139 podStartE2EDuration="1m18.42525213s" podCreationTimestamp="2026-02-19 21:18:23 +0000 UTC" firstStartedPulling="2026-02-19 21:18:36.585279519 +0000 UTC m=+1147.213122569" lastFinishedPulling="2026-02-19 21:19:40.28877051 +0000 UTC m=+1210.916613560" observedRunningTime="2026-02-19 21:19:41.423569758 +0000 UTC m=+1212.051412808" watchObservedRunningTime="2026-02-19 21:19:41.42525213 +0000 UTC m=+1212.053095180" Feb 19 21:19:42 crc kubenswrapper[4886]: I0219 21:19:42.865028 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:42 crc kubenswrapper[4886]: I0219 21:19:42.956892 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqpp2\" (UniqueName: \"kubernetes.io/projected/e5022181-22f3-477d-83c1-94270a8c9da3-kube-api-access-hqpp2\") pod \"e5022181-22f3-477d-83c1-94270a8c9da3\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " Feb 19 21:19:42 crc kubenswrapper[4886]: I0219 21:19:42.957082 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5022181-22f3-477d-83c1-94270a8c9da3-operator-scripts\") pod \"e5022181-22f3-477d-83c1-94270a8c9da3\" (UID: \"e5022181-22f3-477d-83c1-94270a8c9da3\") " Feb 19 21:19:42 crc kubenswrapper[4886]: I0219 21:19:42.957926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5022181-22f3-477d-83c1-94270a8c9da3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5022181-22f3-477d-83c1-94270a8c9da3" (UID: "e5022181-22f3-477d-83c1-94270a8c9da3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:42 crc kubenswrapper[4886]: I0219 21:19:42.970568 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5022181-22f3-477d-83c1-94270a8c9da3-kube-api-access-hqpp2" (OuterVolumeSpecName: "kube-api-access-hqpp2") pod "e5022181-22f3-477d-83c1-94270a8c9da3" (UID: "e5022181-22f3-477d-83c1-94270a8c9da3"). InnerVolumeSpecName "kube-api-access-hqpp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.059816 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5022181-22f3-477d-83c1-94270a8c9da3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.059857 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqpp2\" (UniqueName: \"kubernetes.io/projected/e5022181-22f3-477d-83c1-94270a8c9da3-kube-api-access-hqpp2\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.269886 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.304951 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.317792 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.364160 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dz5ch-config-ncl55" event={"ID":"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7","Type":"ContainerDied","Data":"dc06225bba60e09d2d20129e7e2073de96f9fefa26f355ff881515bba52c9b60"} Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.364200 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc06225bba60e09d2d20129e7e2073de96f9fefa26f355ff881515bba52c9b60" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.364294 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dz5ch-config-ncl55" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.365862 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrbwg\" (UniqueName: \"kubernetes.io/projected/b7774da6-bf0a-41f3-9ca6-26ca8c567053-kube-api-access-rrbwg\") pod \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.365950 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7774da6-bf0a-41f3-9ca6-26ca8c567053-operator-scripts\") pod \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\" (UID: \"b7774da6-bf0a-41f3-9ca6-26ca8c567053\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.366894 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7774da6-bf0a-41f3-9ca6-26ca8c567053-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7774da6-bf0a-41f3-9ca6-26ca8c567053" (UID: "b7774da6-bf0a-41f3-9ca6-26ca8c567053"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.367443 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7774da6-bf0a-41f3-9ca6-26ca8c567053-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.368926 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" event={"ID":"e5022181-22f3-477d-83c1-94270a8c9da3","Type":"ContainerDied","Data":"d1623176ef9b14b771957f4ca2cd15efa292d82223d8020d8bcf3dcb05afac13"} Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.368965 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1623176ef9b14b771957f4ca2cd15efa292d82223d8020d8bcf3dcb05afac13" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.369033 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c4ab-account-create-update-zxvnm" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.371895 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" event={"ID":"b7774da6-bf0a-41f3-9ca6-26ca8c567053","Type":"ContainerDied","Data":"b8f188688da930b5d0c2644e83ec10f88c55818655fc9d4197944b4f0030e90a"} Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.371943 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f188688da930b5d0c2644e83ec10f88c55818655fc9d4197944b4f0030e90a" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.372005 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.374727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z8xjt" event={"ID":"2e585e30-61e9-46d4-9317-561c8e70c60b","Type":"ContainerDied","Data":"3f1a294475400068634763a8417fea678d61be028043bd6d5f36086531479eb5"} Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.374753 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1a294475400068634763a8417fea678d61be028043bd6d5f36086531479eb5" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.374750 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7774da6-bf0a-41f3-9ca6-26ca8c567053-kube-api-access-rrbwg" (OuterVolumeSpecName: "kube-api-access-rrbwg") pod "b7774da6-bf0a-41f3-9ca6-26ca8c567053" (UID: "b7774da6-bf0a-41f3-9ca6-26ca8c567053"). InnerVolumeSpecName "kube-api-access-rrbwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.374800 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z8xjt" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468074 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-additional-scripts\") pod \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468111 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e585e30-61e9-46d4-9317-561c8e70c60b-operator-scripts\") pod \"2e585e30-61e9-46d4-9317-561c8e70c60b\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468211 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-487rq\" (UniqueName: \"kubernetes.io/projected/2e585e30-61e9-46d4-9317-561c8e70c60b-kube-api-access-487rq\") pod \"2e585e30-61e9-46d4-9317-561c8e70c60b\" (UID: \"2e585e30-61e9-46d4-9317-561c8e70c60b\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468292 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run-ovn\") pod \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468327 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-scripts\") pod \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468366 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run\") pod \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468410 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-log-ovn\") pod \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468410 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" (UID: "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xthk\" (UniqueName: \"kubernetes.io/projected/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-kube-api-access-4xthk\") pod \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\" (UID: \"c673bdb1-1bbb-4e9f-96e7-435bd43b10c7\") " Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468591 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run" (OuterVolumeSpecName: "var-run") pod "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" (UID: "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.468685 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" (UID: "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.469071 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" (UID: "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.469212 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-scripts" (OuterVolumeSpecName: "scripts") pod "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" (UID: "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.469509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e585e30-61e9-46d4-9317-561c8e70c60b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e585e30-61e9-46d4-9317-561c8e70c60b" (UID: "2e585e30-61e9-46d4-9317-561c8e70c60b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472226 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrbwg\" (UniqueName: \"kubernetes.io/projected/b7774da6-bf0a-41f3-9ca6-26ca8c567053-kube-api-access-rrbwg\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472274 4886 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472289 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472300 4886 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472311 4886 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472326 4886 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472337 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e585e30-61e9-46d4-9317-561c8e70c60b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472551 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-kube-api-access-4xthk" (OuterVolumeSpecName: "kube-api-access-4xthk") pod "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" (UID: "c673bdb1-1bbb-4e9f-96e7-435bd43b10c7"). InnerVolumeSpecName "kube-api-access-4xthk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.472634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e585e30-61e9-46d4-9317-561c8e70c60b-kube-api-access-487rq" (OuterVolumeSpecName: "kube-api-access-487rq") pod "2e585e30-61e9-46d4-9317-561c8e70c60b" (UID: "2e585e30-61e9-46d4-9317-561c8e70c60b"). InnerVolumeSpecName "kube-api-access-487rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.576776 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xthk\" (UniqueName: \"kubernetes.io/projected/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7-kube-api-access-4xthk\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:43 crc kubenswrapper[4886]: I0219 21:19:43.576818 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-487rq\" (UniqueName: \"kubernetes.io/projected/2e585e30-61e9-46d4-9317-561c8e70c60b-kube-api-access-487rq\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.391498 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"e8b48afe30e568802309933c1d77ee19fa6d0238a846b80ab10688022b15b037"} Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.391554 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"fd890f118b8d0c5a5bf9a5979e06f5a7e78057c60b7927fd3b0b3d3fb74f1f9a"} Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.391567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"0aab583d3a512df32e3b066e9460e920020d06a0421d09ab6cb058198937ded8"} Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.391576 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"856dc68a820b6bcf759cc01b2cafda1dbc0d428e2ec62d4737b3174fa3ebda5b"} Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.443607 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dz5ch-config-ncl55"] Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.452099 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dz5ch-config-ncl55"] Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.611909 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" path="/var/lib/kubelet/pods/c673bdb1-1bbb-4e9f-96e7-435bd43b10c7/volumes" Feb 19 21:19:44 crc kubenswrapper[4886]: I0219 21:19:44.969987 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:47 crc kubenswrapper[4886]: I0219 21:19:47.926493 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 19 21:19:48 crc kubenswrapper[4886]: I0219 21:19:48.296937 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" 
podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 19 21:19:48 crc kubenswrapper[4886]: I0219 21:19:48.341458 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:19:48 crc kubenswrapper[4886]: I0219 21:19:48.351943 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 19 21:19:48 crc kubenswrapper[4886]: I0219 21:19:48.497431 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"cc0675207a657276969ff6948bd52b92f3722f1bfaaf1515774df66fa5f3f149"} Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.082071 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:19:49 crc kubenswrapper[4886]: E0219 21:19:49.085066 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" containerName="ovn-config" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085094 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" containerName="ovn-config" Feb 19 21:19:49 crc kubenswrapper[4886]: E0219 21:19:49.085122 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9608b54a-4daa-4b61-8be4-30a5f155512d" containerName="swift-ring-rebalance" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085131 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9608b54a-4daa-4b61-8be4-30a5f155512d" containerName="swift-ring-rebalance" Feb 19 21:19:49 crc kubenswrapper[4886]: E0219 21:19:49.085148 4886 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e5022181-22f3-477d-83c1-94270a8c9da3" containerName="mariadb-account-create-update" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085156 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5022181-22f3-477d-83c1-94270a8c9da3" containerName="mariadb-account-create-update" Feb 19 21:19:49 crc kubenswrapper[4886]: E0219 21:19:49.085171 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e585e30-61e9-46d4-9317-561c8e70c60b" containerName="mariadb-account-create-update" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085178 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e585e30-61e9-46d4-9317-561c8e70c60b" containerName="mariadb-account-create-update" Feb 19 21:19:49 crc kubenswrapper[4886]: E0219 21:19:49.085204 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7774da6-bf0a-41f3-9ca6-26ca8c567053" containerName="mariadb-database-create" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085211 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7774da6-bf0a-41f3-9ca6-26ca8c567053" containerName="mariadb-database-create" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085450 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e585e30-61e9-46d4-9317-561c8e70c60b" containerName="mariadb-account-create-update" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085480 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5022181-22f3-477d-83c1-94270a8c9da3" containerName="mariadb-account-create-update" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085495 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9608b54a-4daa-4b61-8be4-30a5f155512d" containerName="swift-ring-rebalance" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085517 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7774da6-bf0a-41f3-9ca6-26ca8c567053" 
containerName="mariadb-database-create" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.085532 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c673bdb1-1bbb-4e9f-96e7-435bd43b10c7" containerName="ovn-config" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.086367 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.092527 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.116800 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.207878 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk6sq\" (UniqueName: \"kubernetes.io/projected/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-kube-api-access-hk6sq\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.207934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.208100 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-config-data\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.309726 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hk6sq\" (UniqueName: \"kubernetes.io/projected/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-kube-api-access-hk6sq\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.310125 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.310411 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-config-data\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.317283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-config-data\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.336788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.340927 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk6sq\" (UniqueName: \"kubernetes.io/projected/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-kube-api-access-hk6sq\") pod 
\"mysqld-exporter-0\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.509387 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"0b3197179526378f87bf6b7d30dac9c3063f0c3db19735f9bf3046eb6f7032ac"} Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.509696 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"b2c093db1592657c4e8c184b173d7fa9f5145bd7e688e23b812ed3f838507701"} Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.509706 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"64d8b06793a1628f18e50f9c78a8508d9d26079b657e020320c741d2359b68e6"} Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.521851 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 21:19:49 crc kubenswrapper[4886]: I0219 21:19:49.833317 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:19:49 crc kubenswrapper[4886]: W0219 21:19:49.837035 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5232d9c6_d6bc_4e29_ba43_b3fb8ccb8fcd.slice/crio-8b5f9309a91b928673b08928277c711abee3387369836cea2682959f1f0f857c WatchSource:0}: Error finding container 8b5f9309a91b928673b08928277c711abee3387369836cea2682959f1f0f857c: Status 404 returned error can't find the container with id 8b5f9309a91b928673b08928277c711abee3387369836cea2682959f1f0f857c Feb 19 21:19:50 crc kubenswrapper[4886]: I0219 21:19:50.522200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd","Type":"ContainerStarted","Data":"8b5f9309a91b928673b08928277c711abee3387369836cea2682959f1f0f857c"} Feb 19 21:19:51 crc kubenswrapper[4886]: I0219 21:19:51.544626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"a7b372805595ca0c0603173f13d8ecdd4d0fe6bec427970b33315a2e263f0056"} Feb 19 21:19:52 crc kubenswrapper[4886]: I0219 21:19:52.560702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"1db7205b39c3d01b0e494329429c1e8addb9d403a2b53e2ce603605587399064"} Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.570299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd","Type":"ContainerStarted","Data":"2622b6ba4ef66855489ea8bf6cb4d8c8c7ff92e63e3c380e8626086fdefda345"} Feb 19 21:19:53 crc 
kubenswrapper[4886]: I0219 21:19:53.572157 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wfngr" event={"ID":"f652b0b8-6eee-4ebb-b0ad-e22de89080a6","Type":"ContainerStarted","Data":"91dfd157308c118eb74b6d09d731bed9c28d03ea9fb702843e79541cda906dda"} Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.580885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"32f77cade52399cde3263d1e735182e6113a52a51bba186d62be7a0227611389"} Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.580922 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"d4c2946bdc0411e6b4f3e766675ef01307b5eb6e500428a1621065d994ff50ea"} Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.580930 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"efe8b584d32653b3df25b35159d257c501d91d96bf80fc0f5949fc984a94375b"} Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.580940 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"a286cbd6692e8ad7fbf06f56f5a7e2182b6fca5d9ad9d52b8df71a1600a89d8e"} Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.596160 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.653375367 podStartE2EDuration="4.59614345s" podCreationTimestamp="2026-02-19 21:19:49 +0000 UTC" firstStartedPulling="2026-02-19 21:19:49.84688459 +0000 UTC m=+1220.474727640" lastFinishedPulling="2026-02-19 21:19:52.789652673 +0000 UTC m=+1223.417495723" observedRunningTime="2026-02-19 21:19:53.59053367 +0000 
UTC m=+1224.218376710" watchObservedRunningTime="2026-02-19 21:19:53.59614345 +0000 UTC m=+1224.223986500" Feb 19 21:19:53 crc kubenswrapper[4886]: I0219 21:19:53.618974 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wfngr" podStartSLOduration=6.061482345 podStartE2EDuration="37.61895656s" podCreationTimestamp="2026-02-19 21:19:16 +0000 UTC" firstStartedPulling="2026-02-19 21:19:21.30870597 +0000 UTC m=+1191.936549020" lastFinishedPulling="2026-02-19 21:19:52.866180185 +0000 UTC m=+1223.494023235" observedRunningTime="2026-02-19 21:19:53.608316995 +0000 UTC m=+1224.236160045" watchObservedRunningTime="2026-02-19 21:19:53.61895656 +0000 UTC m=+1224.246799610" Feb 19 21:19:54 crc kubenswrapper[4886]: I0219 21:19:54.625829 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"da7809cc-c661-4b8c-ab78-4f87229b18d1","Type":"ContainerStarted","Data":"f652ff5db29ec15b716c76bdb4e3e0727530a21d28892187d45b02dc7d9fb612"} Feb 19 21:19:54 crc kubenswrapper[4886]: I0219 21:19:54.702257 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.649826414 podStartE2EDuration="51.702225252s" podCreationTimestamp="2026-02-19 21:19:03 +0000 UTC" firstStartedPulling="2026-02-19 21:19:40.879832996 +0000 UTC m=+1211.507676046" lastFinishedPulling="2026-02-19 21:19:50.932231824 +0000 UTC m=+1221.560074884" observedRunningTime="2026-02-19 21:19:54.687006112 +0000 UTC m=+1225.314849212" watchObservedRunningTime="2026-02-19 21:19:54.702225252 +0000 UTC m=+1225.330068332" Feb 19 21:19:54 crc kubenswrapper[4886]: I0219 21:19:54.970440 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:54 crc kubenswrapper[4886]: I0219 21:19:54.974476 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.020533 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-29fmc"] Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.022312 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.023802 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.028846 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-29fmc"] Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.033019 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.033177 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.033198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9cs\" (UniqueName: \"kubernetes.io/projected/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-kube-api-access-7x9cs\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: 
I0219 21:19:55.033241 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.033312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-config\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.033345 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.135192 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-config\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.135252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.135422 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.135547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.135569 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9cs\" (UniqueName: \"kubernetes.io/projected/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-kube-api-access-7x9cs\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.135599 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.136230 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-config\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.136323 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.136598 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.136685 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.137066 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.154126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9cs\" (UniqueName: \"kubernetes.io/projected/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-kube-api-access-7x9cs\") pod \"dnsmasq-dns-764c5664d7-29fmc\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.343954 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.624034 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:55 crc kubenswrapper[4886]: I0219 21:19:55.930998 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-29fmc"] Feb 19 21:19:56 crc kubenswrapper[4886]: I0219 21:19:56.630585 4886 generic.go:334] "Generic (PLEG): container finished" podID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerID="ce4b951eb6b17dba557bef6c9f5bad151540f3499a1f9585e04c0b7c8f5b073a" exitCode=0 Feb 19 21:19:56 crc kubenswrapper[4886]: I0219 21:19:56.630650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" event={"ID":"f2bb1ddd-f03a-4236-b0ea-b35e551916e5","Type":"ContainerDied","Data":"ce4b951eb6b17dba557bef6c9f5bad151540f3499a1f9585e04c0b7c8f5b073a"} Feb 19 21:19:56 crc kubenswrapper[4886]: I0219 21:19:56.631063 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" event={"ID":"f2bb1ddd-f03a-4236-b0ea-b35e551916e5","Type":"ContainerStarted","Data":"5f0d2077afc6ad99fc19cf667f4e4039c1c94661574c01181eda6e962060e477"} Feb 19 21:19:57 crc kubenswrapper[4886]: I0219 21:19:57.640481 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" event={"ID":"f2bb1ddd-f03a-4236-b0ea-b35e551916e5","Type":"ContainerStarted","Data":"aa77ba5b5db818dabf56bfe61c4ddee323d1a77018c113fd78c042c9786b1786"} Feb 19 21:19:57 crc kubenswrapper[4886]: I0219 21:19:57.641052 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:19:57 crc kubenswrapper[4886]: I0219 21:19:57.664806 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" 
podStartSLOduration=3.664787209 podStartE2EDuration="3.664787209s" podCreationTimestamp="2026-02-19 21:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:19:57.659554338 +0000 UTC m=+1228.287397398" watchObservedRunningTime="2026-02-19 21:19:57.664787209 +0000 UTC m=+1228.292630259" Feb 19 21:19:57 crc kubenswrapper[4886]: I0219 21:19:57.926481 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.135656 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.135985 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="config-reloader" containerID="cri-o://e32970c64c452030d86770e4c7ef38d6c3995f1e790412d8632dbc857d44dece" gracePeriod=600 Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.136001 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="thanos-sidecar" containerID="cri-o://5901ab19ac9f8cce17ffd4fc604c8959e1c78ad63506751ae97cd5b24739368d" gracePeriod=600 Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.135930 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="prometheus" containerID="cri-o://d0b3590f64c180736d21b24d642ec1bda0d86e2e15e7ade3315d60658dac0252" gracePeriod=600 Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.308580 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 19 21:19:58 crc 
kubenswrapper[4886]: I0219 21:19:58.352456 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.684037 4886 generic.go:334] "Generic (PLEG): container finished" podID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerID="5901ab19ac9f8cce17ffd4fc604c8959e1c78ad63506751ae97cd5b24739368d" exitCode=0 Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.684304 4886 generic.go:334] "Generic (PLEG): container finished" podID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerID="e32970c64c452030d86770e4c7ef38d6c3995f1e790412d8632dbc857d44dece" exitCode=0 Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.684313 4886 generic.go:334] "Generic (PLEG): container finished" podID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerID="d0b3590f64c180736d21b24d642ec1bda0d86e2e15e7ade3315d60658dac0252" exitCode=0 Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.684194 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerDied","Data":"5901ab19ac9f8cce17ffd4fc604c8959e1c78ad63506751ae97cd5b24739368d"} Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.684933 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerDied","Data":"e32970c64c452030d86770e4c7ef38d6c3995f1e790412d8632dbc857d44dece"} Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.684947 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerDied","Data":"d0b3590f64c180736d21b24d642ec1bda0d86e2e15e7ade3315d60658dac0252"} Feb 19 21:19:58 crc kubenswrapper[4886]: I0219 21:19:58.910639 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.016771 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcprz\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-kube-api-access-qcprz\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017002 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config-out\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017049 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-web-config\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017128 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-0\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017183 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-tls-assets\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017217 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-1\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017287 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-thanos-prometheus-http-client-file\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017565 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.017604 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-2\") pod \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\" (UID: \"8cccf7aa-d200-47d7-96d4-cf6b048f966e\") " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.018770 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.018850 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.019165 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.024525 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.024630 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-kube-api-access-qcprz" (OuterVolumeSpecName: "kube-api-access-qcprz") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "kube-api-access-qcprz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.025873 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config" (OuterVolumeSpecName: "config") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.026587 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config-out" (OuterVolumeSpecName: "config-out") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.028356 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.048540 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.054356 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-web-config" (OuterVolumeSpecName: "web-config") pod "8cccf7aa-d200-47d7-96d4-cf6b048f966e" (UID: "8cccf7aa-d200-47d7-96d4-cf6b048f966e"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119876 4886 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119910 4886 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119920 4886 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119929 4886 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119943 4886 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119955 4886 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.119994 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") on node \"crc\" " Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.120005 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cccf7aa-d200-47d7-96d4-cf6b048f966e-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.120015 4886 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8cccf7aa-d200-47d7-96d4-cf6b048f966e-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.120025 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcprz\" (UniqueName: \"kubernetes.io/projected/8cccf7aa-d200-47d7-96d4-cf6b048f966e-kube-api-access-qcprz\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.142069 4886 
csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.142249 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e") on node "crc" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.221851 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") on node \"crc\" DevicePath \"\"" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.695481 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cccf7aa-d200-47d7-96d4-cf6b048f966e","Type":"ContainerDied","Data":"6b8f4475273da307d316594838be7feb0516f55c7a912d65a71527f93b1aad03"} Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.695536 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.695549 4886 scope.go:117] "RemoveContainer" containerID="5901ab19ac9f8cce17ffd4fc604c8959e1c78ad63506751ae97cd5b24739368d" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.723780 4886 scope.go:117] "RemoveContainer" containerID="e32970c64c452030d86770e4c7ef38d6c3995f1e790412d8632dbc857d44dece" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.744528 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.752963 4886 scope.go:117] "RemoveContainer" containerID="d0b3590f64c180736d21b24d642ec1bda0d86e2e15e7ade3315d60658dac0252" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.759899 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.779755 4886 scope.go:117] "RemoveContainer" containerID="67c38c89f5bb22932e22d47703fc0fe71f7a8b826a0f8ee230b5ad9693ac5bcc" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.789410 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:19:59 crc kubenswrapper[4886]: E0219 21:19:59.789981 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="prometheus" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790003 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="prometheus" Feb 19 21:19:59 crc kubenswrapper[4886]: E0219 21:19:59.790027 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="init-config-reloader" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790041 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="init-config-reloader" Feb 19 21:19:59 crc kubenswrapper[4886]: E0219 21:19:59.790067 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="thanos-sidecar" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790075 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="thanos-sidecar" Feb 19 21:19:59 crc kubenswrapper[4886]: E0219 21:19:59.790100 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="config-reloader" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790109 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="config-reloader" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790477 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="thanos-sidecar" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790508 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="prometheus" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.790524 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" containerName="config-reloader" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.793342 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797117 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797311 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797443 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jvtfj" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797575 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797692 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797697 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.797909 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.810726 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.815076 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.818577 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.837923 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.838358 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.838443 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.838483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5fd4db44-9fcc-4954-9896-7f47be765647-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.838511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config\") pod \"prometheus-metric-storage-0\" 
(UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.839625 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.839757 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.839822 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.839914 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5fd4db44-9fcc-4954-9896-7f47be765647-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.841357 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.841453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.841543 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8hp\" (UniqueName: \"kubernetes.io/projected/5fd4db44-9fcc-4954-9896-7f47be765647-kube-api-access-jp8hp\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.841619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-config\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.943544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.943606 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5fd4db44-9fcc-4954-9896-7f47be765647-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.943634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.943693 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.943738 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.943805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.944422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5fd4db44-9fcc-4954-9896-7f47be765647-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.944490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.944544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.944612 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8hp\" (UniqueName: \"kubernetes.io/projected/5fd4db44-9fcc-4954-9896-7f47be765647-kube-api-access-jp8hp\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.944741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.945190 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.945214 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5fd4db44-9fcc-4954-9896-7f47be765647-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.945440 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-config\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.945779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.946734 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.948501 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.949204 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.949200 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-config\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.950757 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5fd4db44-9fcc-4954-9896-7f47be765647-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc 
kubenswrapper[4886]: I0219 21:19:59.952882 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.953975 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.954773 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.954802 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4b24d278a8e1561de69dc574b3cb2f98927434e79938fd0747c818d910fbafa9/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.955502 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5fd4db44-9fcc-4954-9896-7f47be765647-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.957513 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5fd4db44-9fcc-4954-9896-7f47be765647-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.966823 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8hp\" (UniqueName: \"kubernetes.io/projected/5fd4db44-9fcc-4954-9896-7f47be765647-kube-api-access-jp8hp\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:19:59 crc kubenswrapper[4886]: I0219 21:19:59.993778 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b2e059c-8bc1-4af3-a4f2-656b79634a4e\") pod \"prometheus-metric-storage-0\" (UID: \"5fd4db44-9fcc-4954-9896-7f47be765647\") " pod="openstack/prometheus-metric-storage-0" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.134243 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.417777 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cq5sp"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.419645 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.432993 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cq5sp"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.455015 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmsr\" (UniqueName: \"kubernetes.io/projected/248cf37f-1bcc-4904-bba2-8d0398f694df-kube-api-access-bnmsr\") pod \"cinder-db-create-cq5sp\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.455195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248cf37f-1bcc-4904-bba2-8d0398f694df-operator-scripts\") pod \"cinder-db-create-cq5sp\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.537429 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9d9f-account-create-update-ld55p"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.539182 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.545382 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9d9f-account-create-update-ld55p"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.553121 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.560023 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmsr\" (UniqueName: \"kubernetes.io/projected/248cf37f-1bcc-4904-bba2-8d0398f694df-kube-api-access-bnmsr\") pod \"cinder-db-create-cq5sp\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.560243 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248cf37f-1bcc-4904-bba2-8d0398f694df-operator-scripts\") pod \"cinder-db-create-cq5sp\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.561239 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248cf37f-1bcc-4904-bba2-8d0398f694df-operator-scripts\") pod \"cinder-db-create-cq5sp\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.621028 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmsr\" (UniqueName: \"kubernetes.io/projected/248cf37f-1bcc-4904-bba2-8d0398f694df-kube-api-access-bnmsr\") pod \"cinder-db-create-cq5sp\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 
21:20:00.632235 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cccf7aa-d200-47d7-96d4-cf6b048f966e" path="/var/lib/kubelet/pods/8cccf7aa-d200-47d7-96d4-cf6b048f966e/volumes" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.668437 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-operator-scripts\") pod \"cinder-9d9f-account-create-update-ld55p\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.668566 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-kube-api-access-t97sl\") pod \"cinder-9d9f-account-create-update-ld55p\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.695060 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.717163 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-ngwj9"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.718723 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.719426 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5fd4db44-9fcc-4954-9896-7f47be765647","Type":"ContainerStarted","Data":"c855fcf6ef3cd6c2d02f6ec5636bfcc7c33bc58384c2fe0449069b16547c61e0"} Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.725840 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ngwj9"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.742534 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.747131 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c1e3-account-create-update-s6vdn"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.748501 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.751048 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.765848 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c1e3-account-create-update-s6vdn"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.774514 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8222a-a662-49c8-89f5-7fe1f193adfd-operator-scripts\") pod \"heat-db-create-ngwj9\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.774579 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-operator-scripts\") pod \"cinder-9d9f-account-create-update-ld55p\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.774659 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-kube-api-access-t97sl\") pod \"cinder-9d9f-account-create-update-ld55p\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.774704 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2xwj\" (UniqueName: \"kubernetes.io/projected/afb8222a-a662-49c8-89f5-7fe1f193adfd-kube-api-access-s2xwj\") pod \"heat-db-create-ngwj9\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.779718 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-operator-scripts\") pod \"cinder-9d9f-account-create-update-ld55p\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.807356 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-twxkg"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.808828 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.809577 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-kube-api-access-t97sl\") pod \"cinder-9d9f-account-create-update-ld55p\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.812710 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.812814 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.812949 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.813065 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gw5vn" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.823412 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-twxkg"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.835088 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-js4vm"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.836429 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.844507 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-js4vm"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.863749 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877229 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspsb\" (UniqueName: \"kubernetes.io/projected/54149206-2ab9-4e9e-be08-86d91ea986f9-kube-api-access-zspsb\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877330 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9741740a-a65c-49f5-8cdb-156b0d3037ec-operator-scripts\") pod \"heat-c1e3-account-create-update-s6vdn\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877414 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-operator-scripts\") pod \"barbican-db-create-js4vm\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877591 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8222a-a662-49c8-89f5-7fe1f193adfd-operator-scripts\") pod \"heat-db-create-ngwj9\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjx8k\" (UniqueName: \"kubernetes.io/projected/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-kube-api-access-kjx8k\") pod 
\"barbican-db-create-js4vm\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4tl\" (UniqueName: \"kubernetes.io/projected/9741740a-a65c-49f5-8cdb-156b0d3037ec-kube-api-access-dc4tl\") pod \"heat-c1e3-account-create-update-s6vdn\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.877957 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-combined-ca-bundle\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.878049 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-config-data\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.878088 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2xwj\" (UniqueName: \"kubernetes.io/projected/afb8222a-a662-49c8-89f5-7fe1f193adfd-kube-api-access-s2xwj\") pod \"heat-db-create-ngwj9\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.879322 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8222a-a662-49c8-89f5-7fe1f193adfd-operator-scripts\") pod 
\"heat-db-create-ngwj9\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.904013 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2xwj\" (UniqueName: \"kubernetes.io/projected/afb8222a-a662-49c8-89f5-7fe1f193adfd-kube-api-access-s2xwj\") pod \"heat-db-create-ngwj9\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.921342 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5058-account-create-update-zsfmc"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.922794 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.925695 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.954137 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5058-account-create-update-zsfmc"] Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.979807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjx8k\" (UniqueName: \"kubernetes.io/projected/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-kube-api-access-kjx8k\") pod \"barbican-db-create-js4vm\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995323 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4tl\" (UniqueName: \"kubernetes.io/projected/9741740a-a65c-49f5-8cdb-156b0d3037ec-kube-api-access-dc4tl\") pod \"heat-c1e3-account-create-update-s6vdn\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " 
pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995383 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-operator-scripts\") pod \"barbican-5058-account-create-update-zsfmc\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-combined-ca-bundle\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-config-data\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995620 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpwh\" (UniqueName: \"kubernetes.io/projected/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-kube-api-access-2qpwh\") pod \"barbican-5058-account-create-update-zsfmc\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995657 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zspsb\" (UniqueName: \"kubernetes.io/projected/54149206-2ab9-4e9e-be08-86d91ea986f9-kube-api-access-zspsb\") pod \"keystone-db-sync-twxkg\" (UID: 
\"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995705 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9741740a-a65c-49f5-8cdb-156b0d3037ec-operator-scripts\") pod \"heat-c1e3-account-create-update-s6vdn\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.995777 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-operator-scripts\") pod \"barbican-db-create-js4vm\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:00 crc kubenswrapper[4886]: I0219 21:20:00.996653 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-operator-scripts\") pod \"barbican-db-create-js4vm\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.003146 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-combined-ca-bundle\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.003736 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9741740a-a65c-49f5-8cdb-156b0d3037ec-operator-scripts\") pod \"heat-c1e3-account-create-update-s6vdn\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " 
pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.008381 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-config-data\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.014091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjx8k\" (UniqueName: \"kubernetes.io/projected/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-kube-api-access-kjx8k\") pod \"barbican-db-create-js4vm\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.023283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc4tl\" (UniqueName: \"kubernetes.io/projected/9741740a-a65c-49f5-8cdb-156b0d3037ec-kube-api-access-dc4tl\") pod \"heat-c1e3-account-create-update-s6vdn\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.028397 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zspsb\" (UniqueName: \"kubernetes.io/projected/54149206-2ab9-4e9e-be08-86d91ea986f9-kube-api-access-zspsb\") pod \"keystone-db-sync-twxkg\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.033062 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-98fw7"] Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.036608 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.036698 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.051861 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2b4a-account-create-update-zqq2h"] Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.070462 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-98fw7"] Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.071055 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.071648 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.075346 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.081325 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2b4a-account-create-update-zqq2h"] Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.100011 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22179eed-5249-458a-b35d-7f934c393c87-operator-scripts\") pod \"neutron-2b4a-account-create-update-zqq2h\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.100159 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmkn\" (UniqueName: 
\"kubernetes.io/projected/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-kube-api-access-qjmkn\") pod \"neutron-db-create-98fw7\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.100202 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92f9\" (UniqueName: \"kubernetes.io/projected/22179eed-5249-458a-b35d-7f934c393c87-kube-api-access-f92f9\") pod \"neutron-2b4a-account-create-update-zqq2h\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.100255 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-operator-scripts\") pod \"barbican-5058-account-create-update-zsfmc\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.102708 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-operator-scripts\") pod \"barbican-5058-account-create-update-zsfmc\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.103590 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.103643 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-operator-scripts\") pod \"neutron-db-create-98fw7\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.104286 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpwh\" (UniqueName: \"kubernetes.io/projected/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-kube-api-access-2qpwh\") pod \"barbican-5058-account-create-update-zsfmc\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.117339 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.121840 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpwh\" (UniqueName: \"kubernetes.io/projected/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-kube-api-access-2qpwh\") pod \"barbican-5058-account-create-update-zsfmc\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.129452 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.205604 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-operator-scripts\") pod \"neutron-db-create-98fw7\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.205736 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22179eed-5249-458a-b35d-7f934c393c87-operator-scripts\") pod \"neutron-2b4a-account-create-update-zqq2h\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.205816 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmkn\" (UniqueName: \"kubernetes.io/projected/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-kube-api-access-qjmkn\") pod \"neutron-db-create-98fw7\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.205843 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92f9\" (UniqueName: \"kubernetes.io/projected/22179eed-5249-458a-b35d-7f934c393c87-kube-api-access-f92f9\") pod \"neutron-2b4a-account-create-update-zqq2h\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.207719 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-operator-scripts\") pod \"neutron-db-create-98fw7\" (UID: 
\"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.208523 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22179eed-5249-458a-b35d-7f934c393c87-operator-scripts\") pod \"neutron-2b4a-account-create-update-zqq2h\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.223978 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmkn\" (UniqueName: \"kubernetes.io/projected/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-kube-api-access-qjmkn\") pod \"neutron-db-create-98fw7\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.227461 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92f9\" (UniqueName: \"kubernetes.io/projected/22179eed-5249-458a-b35d-7f934c393c87-kube-api-access-f92f9\") pod \"neutron-2b4a-account-create-update-zqq2h\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.382670 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cq5sp"] Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.443897 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.463406 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.566653 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ngwj9"] Feb 19 21:20:01 crc kubenswrapper[4886]: W0219 21:20:01.639861 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafb8222a_a662_49c8_89f5_7fe1f193adfd.slice/crio-b943ec8fc5af877ad65b647605a8bc15c951afd5c54e6a51329a4773811d769e WatchSource:0}: Error finding container b943ec8fc5af877ad65b647605a8bc15c951afd5c54e6a51329a4773811d769e: Status 404 returned error can't find the container with id b943ec8fc5af877ad65b647605a8bc15c951afd5c54e6a51329a4773811d769e Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.640161 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9d9f-account-create-update-ld55p"] Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.739799 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ngwj9" event={"ID":"afb8222a-a662-49c8-89f5-7fe1f193adfd","Type":"ContainerStarted","Data":"b943ec8fc5af877ad65b647605a8bc15c951afd5c54e6a51329a4773811d769e"} Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.747878 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9d9f-account-create-update-ld55p" event={"ID":"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4","Type":"ContainerStarted","Data":"eee12da8a3d1ec11f268774eaad7562d52442361dae61ca83bbc70bff3f2ecd0"} Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.749898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cq5sp" event={"ID":"248cf37f-1bcc-4904-bba2-8d0398f694df","Type":"ContainerStarted","Data":"90638fbe482833aad4c6ba2cbdfe72c1eae1a85e36ba09f5e30f0c9caeece239"} Feb 19 21:20:01 crc kubenswrapper[4886]: I0219 21:20:01.858951 4886 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/barbican-db-create-js4vm"] Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.064202 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c1e3-account-create-update-s6vdn"] Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.076967 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-twxkg"] Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.129723 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5058-account-create-update-zsfmc"] Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.157217 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-98fw7"] Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.358191 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2b4a-account-create-update-zqq2h"] Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.759803 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-98fw7" event={"ID":"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4","Type":"ContainerStarted","Data":"93bdf727a387f1df9247764a78bd9ab90f19d1fc30e39f84fb4c6754c2beb891"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.762148 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c1e3-account-create-update-s6vdn" event={"ID":"9741740a-a65c-49f5-8cdb-156b0d3037ec","Type":"ContainerStarted","Data":"f101af5fccc49a53f0c01652c0944b7c0615198be7c71ae108c12d9855f46f31"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.763561 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-js4vm" event={"ID":"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a","Type":"ContainerStarted","Data":"31d57ba50e3584bbd09e6d91a9005f33f9a8b7dcd721a458766e37bf3779e6ab"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.763668 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-js4vm" event={"ID":"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a","Type":"ContainerStarted","Data":"d7717039c3d6ebedee2b794ace64b5c482b42d49448142c6d7341a6240e83161"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.767855 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ngwj9" event={"ID":"afb8222a-a662-49c8-89f5-7fe1f193adfd","Type":"ContainerStarted","Data":"f0574f2c0097bc69f4a4c95ddc2bb03e213f0fed2e56eaf98cfa2c71a6d7d97f"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.771726 4886 generic.go:334] "Generic (PLEG): container finished" podID="6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" containerID="df09e54b41593dbb719f60ec1e93c8fafaf7787afc72b1f8f5aefc96a8560505" exitCode=0 Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.771836 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9d9f-account-create-update-ld55p" event={"ID":"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4","Type":"ContainerDied","Data":"df09e54b41593dbb719f60ec1e93c8fafaf7787afc72b1f8f5aefc96a8560505"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.775229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5058-account-create-update-zsfmc" event={"ID":"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16","Type":"ContainerStarted","Data":"179ed607b04840d3418f1ac5c0089784c6fb8a269c750b911c32e4d6d0169df3"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.775288 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5058-account-create-update-zsfmc" event={"ID":"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16","Type":"ContainerStarted","Data":"10d45526b5bdcc04138c112038cb32bccef80c97de9d7dc6c6c125038ea8c28e"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.777691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twxkg" 
event={"ID":"54149206-2ab9-4e9e-be08-86d91ea986f9","Type":"ContainerStarted","Data":"28c8f67eb4467b5be1115179f835e2bf37c2fd9e6575df20bc6760ab3190630e"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.782955 4886 generic.go:334] "Generic (PLEG): container finished" podID="248cf37f-1bcc-4904-bba2-8d0398f694df" containerID="cfc82c668ef18d4fec54685f8f41f5fdb8c6748e1b51de731079d7d3ccecd280" exitCode=0 Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.783076 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cq5sp" event={"ID":"248cf37f-1bcc-4904-bba2-8d0398f694df","Type":"ContainerDied","Data":"cfc82c668ef18d4fec54685f8f41f5fdb8c6748e1b51de731079d7d3ccecd280"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.785230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2b4a-account-create-update-zqq2h" event={"ID":"22179eed-5249-458a-b35d-7f934c393c87","Type":"ContainerStarted","Data":"c644b6e9ecef1268c95c9aefcb2698d9163b2f47c92b9fbb0ebbbc9647522a43"} Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.801692 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-js4vm" podStartSLOduration=2.8016713429999998 podStartE2EDuration="2.801671343s" podCreationTimestamp="2026-02-19 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:02.780805858 +0000 UTC m=+1233.408648908" watchObservedRunningTime="2026-02-19 21:20:02.801671343 +0000 UTC m=+1233.429514393" Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.806527 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5058-account-create-update-zsfmc" podStartSLOduration=2.806513973 podStartE2EDuration="2.806513973s" podCreationTimestamp="2026-02-19 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:02.799301535 +0000 UTC m=+1233.427144585" watchObservedRunningTime="2026-02-19 21:20:02.806513973 +0000 UTC m=+1233.434357023" Feb 19 21:20:02 crc kubenswrapper[4886]: I0219 21:20:02.847423 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-ngwj9" podStartSLOduration=2.847404532 podStartE2EDuration="2.847404532s" podCreationTimestamp="2026-02-19 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:02.835948399 +0000 UTC m=+1233.463791449" watchObservedRunningTime="2026-02-19 21:20:02.847404532 +0000 UTC m=+1233.475247582" Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.809119 4886 generic.go:334] "Generic (PLEG): container finished" podID="4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" containerID="179ed607b04840d3418f1ac5c0089784c6fb8a269c750b911c32e4d6d0169df3" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.809480 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5058-account-create-update-zsfmc" event={"ID":"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16","Type":"ContainerDied","Data":"179ed607b04840d3418f1ac5c0089784c6fb8a269c750b911c32e4d6d0169df3"} Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.815254 4886 generic.go:334] "Generic (PLEG): container finished" podID="22179eed-5249-458a-b35d-7f934c393c87" containerID="096c3dd21cce6e083128ca288bbf85534c699ceac1e9e3c7bf351bf4ee4f6fa7" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.815618 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2b4a-account-create-update-zqq2h" event={"ID":"22179eed-5249-458a-b35d-7f934c393c87","Type":"ContainerDied","Data":"096c3dd21cce6e083128ca288bbf85534c699ceac1e9e3c7bf351bf4ee4f6fa7"} Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 
21:20:03.819509 4886 generic.go:334] "Generic (PLEG): container finished" podID="8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" containerID="31d57ba50e3584bbd09e6d91a9005f33f9a8b7dcd721a458766e37bf3779e6ab" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.819566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-js4vm" event={"ID":"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a","Type":"ContainerDied","Data":"31d57ba50e3584bbd09e6d91a9005f33f9a8b7dcd721a458766e37bf3779e6ab"} Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.822308 4886 generic.go:334] "Generic (PLEG): container finished" podID="4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" containerID="719c992027f342d6e8a7d8c3155c8387e91f74d02a6dac127beb416cbd0e07da" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.822430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-98fw7" event={"ID":"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4","Type":"ContainerDied","Data":"719c992027f342d6e8a7d8c3155c8387e91f74d02a6dac127beb416cbd0e07da"} Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.825767 4886 generic.go:334] "Generic (PLEG): container finished" podID="afb8222a-a662-49c8-89f5-7fe1f193adfd" containerID="f0574f2c0097bc69f4a4c95ddc2bb03e213f0fed2e56eaf98cfa2c71a6d7d97f" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.826152 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ngwj9" event={"ID":"afb8222a-a662-49c8-89f5-7fe1f193adfd","Type":"ContainerDied","Data":"f0574f2c0097bc69f4a4c95ddc2bb03e213f0fed2e56eaf98cfa2c71a6d7d97f"} Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.838009 4886 generic.go:334] "Generic (PLEG): container finished" podID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" containerID="91dfd157308c118eb74b6d09d731bed9c28d03ea9fb702843e79541cda906dda" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.838157 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-db-sync-wfngr" event={"ID":"f652b0b8-6eee-4ebb-b0ad-e22de89080a6","Type":"ContainerDied","Data":"91dfd157308c118eb74b6d09d731bed9c28d03ea9fb702843e79541cda906dda"} Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.851476 4886 generic.go:334] "Generic (PLEG): container finished" podID="9741740a-a65c-49f5-8cdb-156b0d3037ec" containerID="06a3316e3dd4608d0a992b6f55507d230086191d7f9ea80b7e5b294d6bd9e786" exitCode=0 Feb 19 21:20:03 crc kubenswrapper[4886]: I0219 21:20:03.851513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c1e3-account-create-update-s6vdn" event={"ID":"9741740a-a65c-49f5-8cdb-156b0d3037ec","Type":"ContainerDied","Data":"06a3316e3dd4608d0a992b6f55507d230086191d7f9ea80b7e5b294d6bd9e786"} Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.385128 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.393086 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.414405 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-kube-api-access-t97sl\") pod \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.414523 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-operator-scripts\") pod \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\" (UID: \"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4\") " Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.415277 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" (UID: "6850c0dd-a8d1-4fa7-83d5-e224be6efcd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.423515 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-kube-api-access-t97sl" (OuterVolumeSpecName: "kube-api-access-t97sl") pod "6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" (UID: "6850c0dd-a8d1-4fa7-83d5-e224be6efcd4"). InnerVolumeSpecName "kube-api-access-t97sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.516582 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnmsr\" (UniqueName: \"kubernetes.io/projected/248cf37f-1bcc-4904-bba2-8d0398f694df-kube-api-access-bnmsr\") pod \"248cf37f-1bcc-4904-bba2-8d0398f694df\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.516766 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248cf37f-1bcc-4904-bba2-8d0398f694df-operator-scripts\") pod \"248cf37f-1bcc-4904-bba2-8d0398f694df\" (UID: \"248cf37f-1bcc-4904-bba2-8d0398f694df\") " Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.517233 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.517250 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4-kube-api-access-t97sl\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.517422 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248cf37f-1bcc-4904-bba2-8d0398f694df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "248cf37f-1bcc-4904-bba2-8d0398f694df" (UID: "248cf37f-1bcc-4904-bba2-8d0398f694df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.525085 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248cf37f-1bcc-4904-bba2-8d0398f694df-kube-api-access-bnmsr" (OuterVolumeSpecName: "kube-api-access-bnmsr") pod "248cf37f-1bcc-4904-bba2-8d0398f694df" (UID: "248cf37f-1bcc-4904-bba2-8d0398f694df"). InnerVolumeSpecName "kube-api-access-bnmsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.619400 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnmsr\" (UniqueName: \"kubernetes.io/projected/248cf37f-1bcc-4904-bba2-8d0398f694df-kube-api-access-bnmsr\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.619748 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248cf37f-1bcc-4904-bba2-8d0398f694df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.888017 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cq5sp" event={"ID":"248cf37f-1bcc-4904-bba2-8d0398f694df","Type":"ContainerDied","Data":"90638fbe482833aad4c6ba2cbdfe72c1eae1a85e36ba09f5e30f0c9caeece239"} Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.888081 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90638fbe482833aad4c6ba2cbdfe72c1eae1a85e36ba09f5e30f0c9caeece239" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.888189 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cq5sp" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.893295 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9d9f-account-create-update-ld55p" event={"ID":"6850c0dd-a8d1-4fa7-83d5-e224be6efcd4","Type":"ContainerDied","Data":"eee12da8a3d1ec11f268774eaad7562d52442361dae61ca83bbc70bff3f2ecd0"} Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.893353 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee12da8a3d1ec11f268774eaad7562d52442361dae61ca83bbc70bff3f2ecd0" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.893438 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9d9f-account-create-update-ld55p" Feb 19 21:20:04 crc kubenswrapper[4886]: I0219 21:20:04.897911 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5fd4db44-9fcc-4954-9896-7f47be765647","Type":"ContainerStarted","Data":"cef5d4fcf79cf8afa74d8a6027ec9eef3e8e10d6817b78203ba264fcdcde8389"} Feb 19 21:20:05 crc kubenswrapper[4886]: I0219 21:20:05.345958 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:20:05 crc kubenswrapper[4886]: I0219 21:20:05.436886 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-w68nz"] Feb 19 21:20:05 crc kubenswrapper[4886]: I0219 21:20:05.437249 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-w68nz" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerName="dnsmasq-dns" containerID="cri-o://8db1c8185e9fd2c2375563d15ac049e3c2cd18b62ca586d08dcc6fafb3474d47" gracePeriod=10 Feb 19 21:20:05 crc kubenswrapper[4886]: I0219 21:20:05.911540 4886 generic.go:334] "Generic (PLEG): container finished" podID="24f96e8e-1d05-40aa-ac1f-20450b541c44" 
containerID="8db1c8185e9fd2c2375563d15ac049e3c2cd18b62ca586d08dcc6fafb3474d47" exitCode=0 Feb 19 21:20:05 crc kubenswrapper[4886]: I0219 21:20:05.911592 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-w68nz" event={"ID":"24f96e8e-1d05-40aa-ac1f-20450b541c44","Type":"ContainerDied","Data":"8db1c8185e9fd2c2375563d15ac049e3c2cd18b62ca586d08dcc6fafb3474d47"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.209500 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.210148 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.227363 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.232293 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.232971 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wfngr" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.264304 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.264652 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410601 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjx8k\" (UniqueName: \"kubernetes.io/projected/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-kube-api-access-kjx8k\") pod \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410686 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-operator-scripts\") pod \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410719 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-config-data\") pod \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410788 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-combined-ca-bundle\") pod \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410822 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-operator-scripts\") pod \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-s2xwj\" (UniqueName: \"kubernetes.io/projected/afb8222a-a662-49c8-89f5-7fe1f193adfd-kube-api-access-s2xwj\") pod \"afb8222a-a662-49c8-89f5-7fe1f193adfd\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410894 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4r8j\" (UniqueName: \"kubernetes.io/projected/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-kube-api-access-x4r8j\") pod \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410913 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8222a-a662-49c8-89f5-7fe1f193adfd-operator-scripts\") pod \"afb8222a-a662-49c8-89f5-7fe1f193adfd\" (UID: \"afb8222a-a662-49c8-89f5-7fe1f193adfd\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410931 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjmkn\" (UniqueName: \"kubernetes.io/projected/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-kube-api-access-qjmkn\") pod \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\" (UID: \"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.410974 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22179eed-5249-458a-b35d-7f934c393c87-operator-scripts\") pod \"22179eed-5249-458a-b35d-7f934c393c87\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411014 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f92f9\" (UniqueName: \"kubernetes.io/projected/22179eed-5249-458a-b35d-7f934c393c87-kube-api-access-f92f9\") pod 
\"22179eed-5249-458a-b35d-7f934c393c87\" (UID: \"22179eed-5249-458a-b35d-7f934c393c87\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411033 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-operator-scripts\") pod \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\" (UID: \"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411091 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" (UID: "4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411103 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qpwh\" (UniqueName: \"kubernetes.io/projected/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-kube-api-access-2qpwh\") pod \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\" (UID: \"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411208 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc4tl\" (UniqueName: \"kubernetes.io/projected/9741740a-a65c-49f5-8cdb-156b0d3037ec-kube-api-access-dc4tl\") pod \"9741740a-a65c-49f5-8cdb-156b0d3037ec\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411233 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9741740a-a65c-49f5-8cdb-156b0d3037ec-operator-scripts\") pod \"9741740a-a65c-49f5-8cdb-156b0d3037ec\" (UID: \"9741740a-a65c-49f5-8cdb-156b0d3037ec\") " Feb 19 
21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.411307 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-db-sync-config-data\") pod \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\" (UID: \"f652b0b8-6eee-4ebb-b0ad-e22de89080a6\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.412173 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.412517 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb8222a-a662-49c8-89f5-7fe1f193adfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afb8222a-a662-49c8-89f5-7fe1f193adfd" (UID: "afb8222a-a662-49c8-89f5-7fe1f193adfd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.412703 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22179eed-5249-458a-b35d-7f934c393c87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22179eed-5249-458a-b35d-7f934c393c87" (UID: "22179eed-5249-458a-b35d-7f934c393c87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.412884 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" (UID: "4a2f507c-d8b5-46b9-82f7-dd0e5be787c4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.413290 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" (UID: "8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.413599 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9741740a-a65c-49f5-8cdb-156b0d3037ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9741740a-a65c-49f5-8cdb-156b0d3037ec" (UID: "9741740a-a65c-49f5-8cdb-156b0d3037ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.415183 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-kube-api-access-2qpwh" (OuterVolumeSpecName: "kube-api-access-2qpwh") pod "4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" (UID: "4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16"). InnerVolumeSpecName "kube-api-access-2qpwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.421116 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-kube-api-access-x4r8j" (OuterVolumeSpecName: "kube-api-access-x4r8j") pod "f652b0b8-6eee-4ebb-b0ad-e22de89080a6" (UID: "f652b0b8-6eee-4ebb-b0ad-e22de89080a6"). InnerVolumeSpecName "kube-api-access-x4r8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.423081 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22179eed-5249-458a-b35d-7f934c393c87-kube-api-access-f92f9" (OuterVolumeSpecName: "kube-api-access-f92f9") pod "22179eed-5249-458a-b35d-7f934c393c87" (UID: "22179eed-5249-458a-b35d-7f934c393c87"). InnerVolumeSpecName "kube-api-access-f92f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.423236 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-kube-api-access-qjmkn" (OuterVolumeSpecName: "kube-api-access-qjmkn") pod "4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" (UID: "4a2f507c-d8b5-46b9-82f7-dd0e5be787c4"). InnerVolumeSpecName "kube-api-access-qjmkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.423380 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f652b0b8-6eee-4ebb-b0ad-e22de89080a6" (UID: "f652b0b8-6eee-4ebb-b0ad-e22de89080a6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.423502 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-kube-api-access-kjx8k" (OuterVolumeSpecName: "kube-api-access-kjx8k") pod "8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" (UID: "8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a"). InnerVolumeSpecName "kube-api-access-kjx8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.424490 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9741740a-a65c-49f5-8cdb-156b0d3037ec-kube-api-access-dc4tl" (OuterVolumeSpecName: "kube-api-access-dc4tl") pod "9741740a-a65c-49f5-8cdb-156b0d3037ec" (UID: "9741740a-a65c-49f5-8cdb-156b0d3037ec"). InnerVolumeSpecName "kube-api-access-dc4tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.471115 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb8222a-a662-49c8-89f5-7fe1f193adfd-kube-api-access-s2xwj" (OuterVolumeSpecName: "kube-api-access-s2xwj") pod "afb8222a-a662-49c8-89f5-7fe1f193adfd" (UID: "afb8222a-a662-49c8-89f5-7fe1f193adfd"). InnerVolumeSpecName "kube-api-access-s2xwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.479470 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-config-data" (OuterVolumeSpecName: "config-data") pod "f652b0b8-6eee-4ebb-b0ad-e22de89080a6" (UID: "f652b0b8-6eee-4ebb-b0ad-e22de89080a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.487152 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f652b0b8-6eee-4ebb-b0ad-e22de89080a6" (UID: "f652b0b8-6eee-4ebb-b0ad-e22de89080a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.500634 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514622 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjx8k\" (UniqueName: \"kubernetes.io/projected/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-kube-api-access-kjx8k\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514659 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514671 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514679 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514689 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2xwj\" (UniqueName: \"kubernetes.io/projected/afb8222a-a662-49c8-89f5-7fe1f193adfd-kube-api-access-s2xwj\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514697 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4r8j\" (UniqueName: \"kubernetes.io/projected/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-kube-api-access-x4r8j\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514707 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8222a-a662-49c8-89f5-7fe1f193adfd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514715 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjmkn\" (UniqueName: \"kubernetes.io/projected/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4-kube-api-access-qjmkn\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514723 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22179eed-5249-458a-b35d-7f934c393c87-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514731 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f92f9\" (UniqueName: \"kubernetes.io/projected/22179eed-5249-458a-b35d-7f934c393c87-kube-api-access-f92f9\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514739 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514747 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qpwh\" (UniqueName: \"kubernetes.io/projected/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16-kube-api-access-2qpwh\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514755 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc4tl\" (UniqueName: \"kubernetes.io/projected/9741740a-a65c-49f5-8cdb-156b0d3037ec-kube-api-access-dc4tl\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514763 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9741740a-a65c-49f5-8cdb-156b0d3037ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.514772 
4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f652b0b8-6eee-4ebb-b0ad-e22de89080a6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.615679 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-nb\") pod \"24f96e8e-1d05-40aa-ac1f-20450b541c44\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.615735 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-sb\") pod \"24f96e8e-1d05-40aa-ac1f-20450b541c44\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.615757 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw79h\" (UniqueName: \"kubernetes.io/projected/24f96e8e-1d05-40aa-ac1f-20450b541c44-kube-api-access-rw79h\") pod \"24f96e8e-1d05-40aa-ac1f-20450b541c44\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.615780 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-config\") pod \"24f96e8e-1d05-40aa-ac1f-20450b541c44\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.615825 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-dns-svc\") pod \"24f96e8e-1d05-40aa-ac1f-20450b541c44\" (UID: \"24f96e8e-1d05-40aa-ac1f-20450b541c44\") " Feb 19 21:20:08 crc 
kubenswrapper[4886]: I0219 21:20:08.633914 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f96e8e-1d05-40aa-ac1f-20450b541c44-kube-api-access-rw79h" (OuterVolumeSpecName: "kube-api-access-rw79h") pod "24f96e8e-1d05-40aa-ac1f-20450b541c44" (UID: "24f96e8e-1d05-40aa-ac1f-20450b541c44"). InnerVolumeSpecName "kube-api-access-rw79h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.662670 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24f96e8e-1d05-40aa-ac1f-20450b541c44" (UID: "24f96e8e-1d05-40aa-ac1f-20450b541c44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.673932 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-config" (OuterVolumeSpecName: "config") pod "24f96e8e-1d05-40aa-ac1f-20450b541c44" (UID: "24f96e8e-1d05-40aa-ac1f-20450b541c44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.673996 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24f96e8e-1d05-40aa-ac1f-20450b541c44" (UID: "24f96e8e-1d05-40aa-ac1f-20450b541c44"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.686100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24f96e8e-1d05-40aa-ac1f-20450b541c44" (UID: "24f96e8e-1d05-40aa-ac1f-20450b541c44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.718783 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.718815 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.718824 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw79h\" (UniqueName: \"kubernetes.io/projected/24f96e8e-1d05-40aa-ac1f-20450b541c44-kube-api-access-rw79h\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.718837 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.718845 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24f96e8e-1d05-40aa-ac1f-20450b541c44-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.938956 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twxkg" 
event={"ID":"54149206-2ab9-4e9e-be08-86d91ea986f9","Type":"ContainerStarted","Data":"19f1d2fd61c1ad3bae5bd5494b56da7c2ba78579e811c788face7bf47acd332c"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.943240 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-w68nz" event={"ID":"24f96e8e-1d05-40aa-ac1f-20450b541c44","Type":"ContainerDied","Data":"8f211fce877a2c04f9b74b8f5565c98b589d43616c6c4b7b16bea309afb9feed"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.943356 4886 scope.go:117] "RemoveContainer" containerID="8db1c8185e9fd2c2375563d15ac049e3c2cd18b62ca586d08dcc6fafb3474d47" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.943375 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-w68nz" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.945447 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2b4a-account-create-update-zqq2h" event={"ID":"22179eed-5249-458a-b35d-7f934c393c87","Type":"ContainerDied","Data":"c644b6e9ecef1268c95c9aefcb2698d9163b2f47c92b9fbb0ebbbc9647522a43"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.945484 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c644b6e9ecef1268c95c9aefcb2698d9163b2f47c92b9fbb0ebbbc9647522a43" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.945535 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2b4a-account-create-update-zqq2h" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.965008 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ngwj9" event={"ID":"afb8222a-a662-49c8-89f5-7fe1f193adfd","Type":"ContainerDied","Data":"b943ec8fc5af877ad65b647605a8bc15c951afd5c54e6a51329a4773811d769e"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.965048 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b943ec8fc5af877ad65b647605a8bc15c951afd5c54e6a51329a4773811d769e" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.965102 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ngwj9" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.976661 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wfngr" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.978110 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wfngr" event={"ID":"f652b0b8-6eee-4ebb-b0ad-e22de89080a6","Type":"ContainerDied","Data":"65fd2daabd6a5a9c69af608c910d8b931ee7e670bed6b14fb6e5204ebc173bea"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.978152 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65fd2daabd6a5a9c69af608c910d8b931ee7e670bed6b14fb6e5204ebc173bea" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.980382 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-twxkg" podStartSLOduration=3.011727379 podStartE2EDuration="8.980361085s" podCreationTimestamp="2026-02-19 21:20:00 +0000 UTC" firstStartedPulling="2026-02-19 21:20:02.070385787 +0000 UTC m=+1232.698228837" lastFinishedPulling="2026-02-19 21:20:08.039019493 +0000 UTC m=+1238.666862543" observedRunningTime="2026-02-19 21:20:08.976450878 
+0000 UTC m=+1239.604293928" watchObservedRunningTime="2026-02-19 21:20:08.980361085 +0000 UTC m=+1239.608204135" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.982024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c1e3-account-create-update-s6vdn" event={"ID":"9741740a-a65c-49f5-8cdb-156b0d3037ec","Type":"ContainerDied","Data":"f101af5fccc49a53f0c01652c0944b7c0615198be7c71ae108c12d9855f46f31"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.982082 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f101af5fccc49a53f0c01652c0944b7c0615198be7c71ae108c12d9855f46f31" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.982176 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c1e3-account-create-update-s6vdn" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.989806 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5058-account-create-update-zsfmc" event={"ID":"4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16","Type":"ContainerDied","Data":"10d45526b5bdcc04138c112038cb32bccef80c97de9d7dc6c6c125038ea8c28e"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.989845 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d45526b5bdcc04138c112038cb32bccef80c97de9d7dc6c6c125038ea8c28e" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.989910 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5058-account-create-update-zsfmc" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.994023 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-js4vm" event={"ID":"8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a","Type":"ContainerDied","Data":"d7717039c3d6ebedee2b794ace64b5c482b42d49448142c6d7341a6240e83161"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.994047 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7717039c3d6ebedee2b794ace64b5c482b42d49448142c6d7341a6240e83161" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.994090 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-js4vm" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.996661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-98fw7" event={"ID":"4a2f507c-d8b5-46b9-82f7-dd0e5be787c4","Type":"ContainerDied","Data":"93bdf727a387f1df9247764a78bd9ab90f19d1fc30e39f84fb4c6754c2beb891"} Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.996684 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93bdf727a387f1df9247764a78bd9ab90f19d1fc30e39f84fb4c6754c2beb891" Feb 19 21:20:08 crc kubenswrapper[4886]: I0219 21:20:08.996716 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-98fw7" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.046955 4886 scope.go:117] "RemoveContainer" containerID="5dd7627d20ae35f0565f20b254e66f331f5a254c6ecfc1f57a5d1629745417f2" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.106179 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-w68nz"] Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.115366 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-w68nz"] Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.723929 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-xck4z"] Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724326 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb8222a-a662-49c8-89f5-7fe1f193adfd" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724339 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb8222a-a662-49c8-89f5-7fe1f193adfd" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724355 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" containerName="glance-db-sync" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724361 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" containerName="glance-db-sync" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724373 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerName="dnsmasq-dns" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724378 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerName="dnsmasq-dns" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724389 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9741740a-a65c-49f5-8cdb-156b0d3037ec" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724397 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9741740a-a65c-49f5-8cdb-156b0d3037ec" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724409 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerName="init" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724416 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerName="init" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724428 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22179eed-5249-458a-b35d-7f934c393c87" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724434 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="22179eed-5249-458a-b35d-7f934c393c87" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724444 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724450 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724456 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724461 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" containerName="mariadb-database-create" Feb 
19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724469 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248cf37f-1bcc-4904-bba2-8d0398f694df" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724475 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="248cf37f-1bcc-4904-bba2-8d0398f694df" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724493 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724498 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: E0219 21:20:09.724515 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724520 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724678 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="22179eed-5249-458a-b35d-7f934c393c87" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724694 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724705 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724715 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="afb8222a-a662-49c8-89f5-7fe1f193adfd" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724725 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724735 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" containerName="glance-db-sync" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724745 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" containerName="dnsmasq-dns" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724754 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724767 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9741740a-a65c-49f5-8cdb-156b0d3037ec" containerName="mariadb-account-create-update" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.724777 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="248cf37f-1bcc-4904-bba2-8d0398f694df" containerName="mariadb-database-create" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.725801 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.753315 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-xck4z"] Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.760386 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.760434 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.760455 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksr9v\" (UniqueName: \"kubernetes.io/projected/a61930df-09b7-4635-a89f-71207b2f4e01-kube-api-access-ksr9v\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.760532 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-config\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.760553 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.760593 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863086 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-config\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863408 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863525 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863658 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863804 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksr9v\" (UniqueName: \"kubernetes.io/projected/a61930df-09b7-4635-a89f-71207b2f4e01-kube-api-access-ksr9v\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.863937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-config\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.864486 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.865029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.865165 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.865585 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:09 crc kubenswrapper[4886]: I0219 21:20:09.884205 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksr9v\" (UniqueName: \"kubernetes.io/projected/a61930df-09b7-4635-a89f-71207b2f4e01-kube-api-access-ksr9v\") pod \"dnsmasq-dns-74f6bcbc87-xck4z\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:10 crc kubenswrapper[4886]: I0219 21:20:10.047867 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:10 crc kubenswrapper[4886]: I0219 21:20:10.528745 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-xck4z"] Feb 19 21:20:10 crc kubenswrapper[4886]: W0219 21:20:10.536800 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda61930df_09b7_4635_a89f_71207b2f4e01.slice/crio-a854a0f9d6f4f037efa55baec650aafa563d7bf08e5d57a4c32def73d976c327 WatchSource:0}: Error finding container a854a0f9d6f4f037efa55baec650aafa563d7bf08e5d57a4c32def73d976c327: Status 404 returned error can't find the container with id a854a0f9d6f4f037efa55baec650aafa563d7bf08e5d57a4c32def73d976c327 Feb 19 21:20:10 crc kubenswrapper[4886]: I0219 21:20:10.624825 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f96e8e-1d05-40aa-ac1f-20450b541c44" path="/var/lib/kubelet/pods/24f96e8e-1d05-40aa-ac1f-20450b541c44/volumes" Feb 19 21:20:11 crc kubenswrapper[4886]: I0219 21:20:11.019146 4886 generic.go:334] "Generic (PLEG): container finished" podID="5fd4db44-9fcc-4954-9896-7f47be765647" containerID="cef5d4fcf79cf8afa74d8a6027ec9eef3e8e10d6817b78203ba264fcdcde8389" exitCode=0 Feb 19 21:20:11 crc kubenswrapper[4886]: I0219 21:20:11.019222 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5fd4db44-9fcc-4954-9896-7f47be765647","Type":"ContainerDied","Data":"cef5d4fcf79cf8afa74d8a6027ec9eef3e8e10d6817b78203ba264fcdcde8389"} Feb 19 21:20:11 crc kubenswrapper[4886]: I0219 21:20:11.020794 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" event={"ID":"a61930df-09b7-4635-a89f-71207b2f4e01","Type":"ContainerStarted","Data":"a854a0f9d6f4f037efa55baec650aafa563d7bf08e5d57a4c32def73d976c327"} Feb 19 21:20:12 crc kubenswrapper[4886]: I0219 21:20:12.041690 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5fd4db44-9fcc-4954-9896-7f47be765647","Type":"ContainerStarted","Data":"3a860ca07efd92ec0abf70f6d18fe01ac874bac35028390f4f088c8d939d3c84"} Feb 19 21:20:12 crc kubenswrapper[4886]: I0219 21:20:12.050533 4886 generic.go:334] "Generic (PLEG): container finished" podID="a61930df-09b7-4635-a89f-71207b2f4e01" containerID="f4c958e2421499efbf0a1cf7d5b85838951fb0fa50160e4ce0095d597d13f388" exitCode=0 Feb 19 21:20:12 crc kubenswrapper[4886]: I0219 21:20:12.050593 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" event={"ID":"a61930df-09b7-4635-a89f-71207b2f4e01","Type":"ContainerDied","Data":"f4c958e2421499efbf0a1cf7d5b85838951fb0fa50160e4ce0095d597d13f388"} Feb 19 21:20:13 crc kubenswrapper[4886]: I0219 21:20:13.064868 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" event={"ID":"a61930df-09b7-4635-a89f-71207b2f4e01","Type":"ContainerStarted","Data":"8401bd89ec5fc526c53acf5c36b87acf8d24d58843887688a033faa07cf3889d"} Feb 19 21:20:13 crc kubenswrapper[4886]: I0219 21:20:13.065681 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:13 crc kubenswrapper[4886]: I0219 21:20:13.089821 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" podStartSLOduration=4.089800327 podStartE2EDuration="4.089800327s" podCreationTimestamp="2026-02-19 21:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:13.082514347 +0000 UTC m=+1243.710357407" watchObservedRunningTime="2026-02-19 21:20:13.089800327 +0000 UTC m=+1243.717643387" Feb 19 21:20:14 crc kubenswrapper[4886]: I0219 21:20:14.074971 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="54149206-2ab9-4e9e-be08-86d91ea986f9" containerID="19f1d2fd61c1ad3bae5bd5494b56da7c2ba78579e811c788face7bf47acd332c" exitCode=0 Feb 19 21:20:14 crc kubenswrapper[4886]: I0219 21:20:14.075042 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twxkg" event={"ID":"54149206-2ab9-4e9e-be08-86d91ea986f9","Type":"ContainerDied","Data":"19f1d2fd61c1ad3bae5bd5494b56da7c2ba78579e811c788face7bf47acd332c"} Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.086220 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5fd4db44-9fcc-4954-9896-7f47be765647","Type":"ContainerStarted","Data":"b98e1778acd95019c26d38d553d5439f5a76908fc7957face2c947cec9c18a12"} Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.086611 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5fd4db44-9fcc-4954-9896-7f47be765647","Type":"ContainerStarted","Data":"b14c3884631b59dbcd2df35d5f87d6dfbca2015441849b8db42f0fac9fbe80fc"} Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.119066 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.119047289 podStartE2EDuration="16.119047289s" podCreationTimestamp="2026-02-19 21:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:15.113758628 +0000 UTC m=+1245.741601678" watchObservedRunningTime="2026-02-19 21:20:15.119047289 +0000 UTC m=+1245.746890339" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.134636 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.137358 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" 
Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.143112 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.500424 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.574883 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-config-data\") pod \"54149206-2ab9-4e9e-be08-86d91ea986f9\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.574931 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zspsb\" (UniqueName: \"kubernetes.io/projected/54149206-2ab9-4e9e-be08-86d91ea986f9-kube-api-access-zspsb\") pod \"54149206-2ab9-4e9e-be08-86d91ea986f9\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.574960 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-combined-ca-bundle\") pod \"54149206-2ab9-4e9e-be08-86d91ea986f9\" (UID: \"54149206-2ab9-4e9e-be08-86d91ea986f9\") " Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.588672 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54149206-2ab9-4e9e-be08-86d91ea986f9-kube-api-access-zspsb" (OuterVolumeSpecName: "kube-api-access-zspsb") pod "54149206-2ab9-4e9e-be08-86d91ea986f9" (UID: "54149206-2ab9-4e9e-be08-86d91ea986f9"). InnerVolumeSpecName "kube-api-access-zspsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.607091 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54149206-2ab9-4e9e-be08-86d91ea986f9" (UID: "54149206-2ab9-4e9e-be08-86d91ea986f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.634550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-config-data" (OuterVolumeSpecName: "config-data") pod "54149206-2ab9-4e9e-be08-86d91ea986f9" (UID: "54149206-2ab9-4e9e-be08-86d91ea986f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.677846 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.677883 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zspsb\" (UniqueName: \"kubernetes.io/projected/54149206-2ab9-4e9e-be08-86d91ea986f9-kube-api-access-zspsb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:15 crc kubenswrapper[4886]: I0219 21:20:15.677895 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54149206-2ab9-4e9e-be08-86d91ea986f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.097450 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twxkg" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.097447 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twxkg" event={"ID":"54149206-2ab9-4e9e-be08-86d91ea986f9","Type":"ContainerDied","Data":"28c8f67eb4467b5be1115179f835e2bf37c2fd9e6575df20bc6760ab3190630e"} Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.098696 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c8f67eb4467b5be1115179f835e2bf37c2fd9e6575df20bc6760ab3190630e" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.102057 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.363507 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-xck4z"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.363948 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" containerName="dnsmasq-dns" containerID="cri-o://8401bd89ec5fc526c53acf5c36b87acf8d24d58843887688a033faa07cf3889d" gracePeriod=10 Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.404642 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-8gccj"] Feb 19 21:20:16 crc kubenswrapper[4886]: E0219 21:20:16.405069 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54149206-2ab9-4e9e-be08-86d91ea986f9" containerName="keystone-db-sync" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.405085 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="54149206-2ab9-4e9e-be08-86d91ea986f9" containerName="keystone-db-sync" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.405251 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="54149206-2ab9-4e9e-be08-86d91ea986f9" containerName="keystone-db-sync" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.406247 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.433303 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-8gccj"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.455158 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vsht2"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.456692 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.462065 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.462315 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gw5vn" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.462456 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.462559 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.462674 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.495385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 
crc kubenswrapper[4886]: I0219 21:20:16.495444 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.495473 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.495527 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpgm6\" (UniqueName: \"kubernetes.io/projected/cf298b43-d3b1-44fd-a07c-5b6d475256b1-kube-api-access-jpgm6\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.495559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.495600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " 
pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.538332 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vsht2"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599028 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc8lk\" (UniqueName: \"kubernetes.io/projected/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-kube-api-access-hc8lk\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599166 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599302 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-credential-keys\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-combined-ca-bundle\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599456 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-scripts\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599590 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-fernet-keys\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-config-data\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpgm6\" (UniqueName: \"kubernetes.io/projected/cf298b43-d3b1-44fd-a07c-5b6d475256b1-kube-api-access-jpgm6\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.599980 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.609370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.609939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.610096 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-sb\") pod 
\"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.610707 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.616958 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.667987 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-x27rt"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.669131 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.681648 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-qs8bc" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.681865 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.684847 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpgm6\" (UniqueName: \"kubernetes.io/projected/cf298b43-d3b1-44fd-a07c-5b6d475256b1-kube-api-access-jpgm6\") pod \"dnsmasq-dns-847c4cc679-8gccj\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.704674 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-config-data\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.704804 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc8lk\" (UniqueName: \"kubernetes.io/projected/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-kube-api-access-hc8lk\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.704909 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-credential-keys\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.704930 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-combined-ca-bundle\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.704959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-scripts\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.704991 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-fernet-keys\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.711213 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-fernet-keys\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.725831 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-credential-keys\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.726284 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.743212 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-combined-ca-bundle\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.744441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-config-data\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.761808 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-scripts\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.774188 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc8lk\" (UniqueName: \"kubernetes.io/projected/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-kube-api-access-hc8lk\") pod \"keystone-bootstrap-vsht2\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.807472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-config-data\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.807532 
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-combined-ca-bundle\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.807595 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgkd\" (UniqueName: \"kubernetes.io/projected/336b4fc8-890f-4ace-baa3-587ebc3b27db-kube-api-access-7lgkd\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.822309 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-x27rt"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.909649 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-config-data\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.909715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-combined-ca-bundle\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.909780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgkd\" (UniqueName: \"kubernetes.io/projected/336b4fc8-890f-4ace-baa3-587ebc3b27db-kube-api-access-7lgkd\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " 
pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.917319 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-combined-ca-bundle\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.917823 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-config-data\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.927812 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.929832 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rlz62"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.944470 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.950807 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgkd\" (UniqueName: \"kubernetes.io/projected/336b4fc8-890f-4ace-baa3-587ebc3b27db-kube-api-access-7lgkd\") pod \"heat-db-sync-x27rt\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.956318 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.962866 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rlz62"] Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.966493 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rsk69" Feb 19 21:20:16 crc kubenswrapper[4886]: I0219 21:20:16.967614 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.014279 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-config-data\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.014359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-scripts\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.014419 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-combined-ca-bundle\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.014558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/956c70ec-60b5-4909-b686-66971581b168-etc-machine-id\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.014672 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-db-sync-config-data\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.014722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcf8p\" (UniqueName: \"kubernetes.io/projected/956c70ec-60b5-4909-b686-66971581b168-kube-api-access-hcf8p\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.093322 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jjrqx"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.095431 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.110030 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.110249 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.110413 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tcw9n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.116890 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-db-sync-config-data\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.116965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcf8p\" (UniqueName: \"kubernetes.io/projected/956c70ec-60b5-4909-b686-66971581b168-kube-api-access-hcf8p\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.116998 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-config-data\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.117020 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-scripts\") pod \"cinder-db-sync-rlz62\" (UID: 
\"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.117056 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-combined-ca-bundle\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.117131 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/956c70ec-60b5-4909-b686-66971581b168-etc-machine-id\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.117249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/956c70ec-60b5-4909-b686-66971581b168-etc-machine-id\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.123242 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-config-data\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.139224 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-combined-ca-bundle\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.242872 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-combined-ca-bundle\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.242942 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-config\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.243076 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkkb\" (UniqueName: \"kubernetes.io/projected/9f14cfdd-608a-42ab-9195-b9773729d874-kube-api-access-zwkkb\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.246796 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-db-sync-config-data\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.253510 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcf8p\" (UniqueName: \"kubernetes.io/projected/956c70ec-60b5-4909-b686-66971581b168-kube-api-access-hcf8p\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.253824 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-scripts\") pod \"cinder-db-sync-rlz62\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.274491 4886 generic.go:334] "Generic (PLEG): container finished" podID="a61930df-09b7-4635-a89f-71207b2f4e01" containerID="8401bd89ec5fc526c53acf5c36b87acf8d24d58843887688a033faa07cf3889d" exitCode=0 Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.274885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" event={"ID":"a61930df-09b7-4635-a89f-71207b2f4e01","Type":"ContainerDied","Data":"8401bd89ec5fc526c53acf5c36b87acf8d24d58843887688a033faa07cf3889d"} Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.289142 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-x27rt" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.322410 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rlz62" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.322821 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jjrqx"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.344394 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-combined-ca-bundle\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.344474 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-config\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.344616 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkkb\" (UniqueName: \"kubernetes.io/projected/9f14cfdd-608a-42ab-9195-b9773729d874-kube-api-access-zwkkb\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.353489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-config\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.361364 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-combined-ca-bundle\") pod \"neutron-db-sync-jjrqx\" (UID: 
\"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.394731 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkkb\" (UniqueName: \"kubernetes.io/projected/9f14cfdd-608a-42ab-9195-b9773729d874-kube-api-access-zwkkb\") pod \"neutron-db-sync-jjrqx\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.405821 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-8gccj"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.425029 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4rq5v"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.426423 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.438614 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.438768 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.438861 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2vjq6" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.446010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-combined-ca-bundle\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.446113 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-config-data\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.446174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-scripts\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.446194 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-logs\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.446219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwvl\" (UniqueName: \"kubernetes.io/projected/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-kube-api-access-5jwvl\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.487106 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xmr6n"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.488536 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.493225 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bdpd4" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.498732 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.508978 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4rq5v"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.536477 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xmr6n"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.549357 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kts74"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-scripts\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550570 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-logs\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550594 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-combined-ca-bundle\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " 
pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550623 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwvl\" (UniqueName: \"kubernetes.io/projected/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-kube-api-access-5jwvl\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550705 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-combined-ca-bundle\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550727 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-db-sync-config-data\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550796 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcch\" (UniqueName: \"kubernetes.io/projected/83d1e69b-5951-43d6-a54b-c73956bb3356-kube-api-access-kgcch\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.550815 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-config-data\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" 
Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.551091 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.554169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-config-data\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.557849 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-combined-ca-bundle\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.558121 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-logs\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.558187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-scripts\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.563538 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kts74"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.574505 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.597392 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwvl\" (UniqueName: \"kubernetes.io/projected/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-kube-api-access-5jwvl\") pod \"placement-db-sync-4rq5v\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.625287 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.630798 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.637133 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.637460 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.637699 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.654771 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-run-httpd\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.654823 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-scripts\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc 
kubenswrapper[4886]: I0219 21:20:17.654855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.654918 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgcch\" (UniqueName: \"kubernetes.io/projected/83d1e69b-5951-43d6-a54b-c73956bb3356-kube-api-access-kgcch\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.654938 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-config\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.654962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-combined-ca-bundle\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc 
kubenswrapper[4886]: I0219 21:20:17.655058 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655078 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655111 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46rl\" (UniqueName: \"kubernetes.io/projected/9ccc4c92-a3b0-47ea-9620-830916d087ab-kube-api-access-j46rl\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655144 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655163 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-config-data\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc 
kubenswrapper[4886]: I0219 21:20:17.655184 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx782\" (UniqueName: \"kubernetes.io/projected/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-kube-api-access-xx782\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655215 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655231 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-log-httpd\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.655252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-db-sync-config-data\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.660731 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-db-sync-config-data\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.671331 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-combined-ca-bundle\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.675481 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgcch\" (UniqueName: \"kubernetes.io/projected/83d1e69b-5951-43d6-a54b-c73956bb3356-kube-api-access-kgcch\") pod \"barbican-db-sync-xmr6n\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.687871 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.689602 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.696755 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.697809 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.697828 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.697948 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jvnwk" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.706329 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.714636 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-847c4cc679-8gccj"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.759559 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.759895 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-config\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.759924 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.759993 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.760011 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc 
kubenswrapper[4886]: I0219 21:20:17.760047 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46rl\" (UniqueName: \"kubernetes.io/projected/9ccc4c92-a3b0-47ea-9620-830916d087ab-kube-api-access-j46rl\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.760069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.760089 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-config-data\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.762135 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.762892 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-config\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.763114 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.763655 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.770920 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.760109 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx782\" (UniqueName: \"kubernetes.io/projected/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-kube-api-access-xx782\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.772674 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.772704 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.772770 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-run-httpd\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.772833 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-scripts\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.774881 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-log-httpd\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.772545 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-config-data\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.775309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-run-httpd\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.779877 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.785904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46rl\" (UniqueName: \"kubernetes.io/projected/9ccc4c92-a3b0-47ea-9620-830916d087ab-kube-api-access-j46rl\") pod \"dnsmasq-dns-785d8bcb8c-kts74\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.786687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.789159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-scripts\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.790178 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.793488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.796085 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx782\" (UniqueName: \"kubernetes.io/projected/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-kube-api-access-xx782\") pod \"ceilometer-0\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.806831 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4rq5v" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.808452 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.826969 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:17 crc kubenswrapper[4886]: E0219 21:20:17.827445 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" containerName="init" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.827461 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" containerName="init" Feb 19 21:20:17 crc kubenswrapper[4886]: E0219 21:20:17.827484 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" containerName="dnsmasq-dns" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.827491 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" containerName="dnsmasq-dns" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.827703 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" containerName="dnsmasq-dns" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.834119 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.845280 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.847233 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.847921 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.868165 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875151 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875211 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875281 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875338 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-logs\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875364 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkkk\" (UniqueName: \"kubernetes.io/projected/1b4b3b88-fd86-414c-a709-6ebb939c03b7-kube-api-access-4gkkk\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.875453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.960107 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vsht2"] Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.977341 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-sb\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: 
\"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.977391 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-config\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.977737 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-swift-storage-0\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.977878 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.977896 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-svc\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.978499 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksr9v\" (UniqueName: \"kubernetes.io/projected/a61930df-09b7-4635-a89f-71207b2f4e01-kube-api-access-ksr9v\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.978862 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkkk\" (UniqueName: 
\"kubernetes.io/projected/1b4b3b88-fd86-414c-a709-6ebb939c03b7-kube-api-access-4gkkk\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.980093 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.981581 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.981782 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.981926 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.982305 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.982694 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.982867 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-logs\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.982962 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983159 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-logs\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983418 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfh9x\" (UniqueName: \"kubernetes.io/projected/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-kube-api-access-nfh9x\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.983742 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.984172 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.985458 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-logs\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.987926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.988830 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.988858 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24501a7431bc13ebaff906718e6fc5aea919dc38d675793dfa72a2f1e9cb67ce/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 21:20:17 crc kubenswrapper[4886]: I0219 21:20:17.992699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.009776 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61930df-09b7-4635-a89f-71207b2f4e01-kube-api-access-ksr9v" (OuterVolumeSpecName: "kube-api-access-ksr9v") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "kube-api-access-ksr9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.011048 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.019288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.021588 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkkk\" (UniqueName: \"kubernetes.io/projected/1b4b3b88-fd86-414c-a709-6ebb939c03b7-kube-api-access-4gkkk\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087160 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087459 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfh9x\" (UniqueName: \"kubernetes.io/projected/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-kube-api-access-nfh9x\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 
19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087557 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087627 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087645 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 
crc kubenswrapper[4886]: I0219 21:20:18.087690 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-logs\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.087746 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksr9v\" (UniqueName: \"kubernetes.io/projected/a61930df-09b7-4635-a89f-71207b2f4e01-kube-api-access-ksr9v\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.088012 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.088028 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-logs\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.104847 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.108251 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.109441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.114792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.120194 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.121351 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa954a4659b21b6248d850d29df2a32c3efe0ed6ad4129bfc4bfafd49a05e255/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.121999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfh9x\" (UniqueName: \"kubernetes.io/projected/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-kube-api-access-nfh9x\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.227357 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.250171 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.271223 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-config" (OuterVolumeSpecName: "config") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.275437 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.283372 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.290608 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.291548 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb\") pod \"a61930df-09b7-4635-a89f-71207b2f4e01\" (UID: \"a61930df-09b7-4635-a89f-71207b2f4e01\") " Feb 19 21:20:18 crc kubenswrapper[4886]: W0219 21:20:18.292041 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a61930df-09b7-4635-a89f-71207b2f4e01/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.292054 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.294627 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.294651 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.294665 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.294674 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.312293 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vsht2" event={"ID":"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624","Type":"ContainerStarted","Data":"7bf14f99bc3c3c78d8c9e1a02501de84920dcec108fe314a903f4cc2789f9904"} Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.315424 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" event={"ID":"cf298b43-d3b1-44fd-a07c-5b6d475256b1","Type":"ContainerStarted","Data":"c839a6c1d31df303de84588e896bf81651c5a6f09056db66c36910972605600d"} Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.315569 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" podUID="cf298b43-d3b1-44fd-a07c-5b6d475256b1" containerName="init" 
containerID="cri-o://1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21" gracePeriod=10 Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.320690 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.321230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" event={"ID":"a61930df-09b7-4635-a89f-71207b2f4e01","Type":"ContainerDied","Data":"a854a0f9d6f4f037efa55baec650aafa563d7bf08e5d57a4c32def73d976c327"} Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.321278 4886 scope.go:117] "RemoveContainer" containerID="8401bd89ec5fc526c53acf5c36b87acf8d24d58843887688a033faa07cf3889d" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.323836 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a61930df-09b7-4635-a89f-71207b2f4e01" (UID: "a61930df-09b7-4635-a89f-71207b2f4e01"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.389438 4886 scope.go:117] "RemoveContainer" containerID="f4c958e2421499efbf0a1cf7d5b85838951fb0fa50160e4ce0095d597d13f388" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.401691 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a61930df-09b7-4635-a89f-71207b2f4e01-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.465635 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.465774 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.530195 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rlz62"] Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.555452 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-x27rt"] Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.574187 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jjrqx"] Feb 19 21:20:18 crc kubenswrapper[4886]: W0219 21:20:18.589585 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f14cfdd_608a_42ab_9195_b9773729d874.slice/crio-79b2991deb184c924c60b10408337406ff6b37dd7506902b1facbb9a57e18e4b WatchSource:0}: Error finding container 79b2991deb184c924c60b10408337406ff6b37dd7506902b1facbb9a57e18e4b: Status 404 returned error can't find the container with id 79b2991deb184c924c60b10408337406ff6b37dd7506902b1facbb9a57e18e4b Feb 19 21:20:18 crc kubenswrapper[4886]: W0219 21:20:18.594073 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod336b4fc8_890f_4ace_baa3_587ebc3b27db.slice/crio-ae2357f768be389806e85e34407260671d23c579a4d8d370340e40fe7b6d998d WatchSource:0}: Error finding container ae2357f768be389806e85e34407260671d23c579a4d8d370340e40fe7b6d998d: Status 404 returned error can't find the container with id ae2357f768be389806e85e34407260671d23c579a4d8d370340e40fe7b6d998d Feb 19 21:20:18 crc kubenswrapper[4886]: W0219 21:20:18.934506 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ccc4c92_a3b0_47ea_9620_830916d087ab.slice/crio-7e89a158cbc86696cf52aa20cde2ed5a1e6bfbd721d45c4c6ad555404e858a10 WatchSource:0}: Error finding container 
7e89a158cbc86696cf52aa20cde2ed5a1e6bfbd721d45c4c6ad555404e858a10: Status 404 returned error can't find the container with id 7e89a158cbc86696cf52aa20cde2ed5a1e6bfbd721d45c4c6ad555404e858a10 Feb 19 21:20:18 crc kubenswrapper[4886]: I0219 21:20:18.938572 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kts74"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.089954 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.146027 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4rq5v"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.186837 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xmr6n"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.223603 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.336917 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-svc\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.337228 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-nb\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.337436 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: 
\"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.337641 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpgm6\" (UniqueName: \"kubernetes.io/projected/cf298b43-d3b1-44fd-a07c-5b6d475256b1-kube-api-access-jpgm6\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.337745 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-sb\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.337856 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-swift-storage-0\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.381615 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf298b43-d3b1-44fd-a07c-5b6d475256b1-kube-api-access-jpgm6" (OuterVolumeSpecName: "kube-api-access-jpgm6") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "kube-api-access-jpgm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.382664 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.395236 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.399112 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4rq5v" event={"ID":"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7","Type":"ContainerStarted","Data":"bf59df88c26bb8617996c22bfccd1d2dd6b6bc98d13f5b1882248dfbd29860e1"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.407612 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerStarted","Data":"cd7a39079d4f724acce531d752341f8807d292e759d32e90eb5c35141ce209a7"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.425741 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.429380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" event={"ID":"9ccc4c92-a3b0-47ea-9620-830916d087ab","Type":"ContainerStarted","Data":"7e89a158cbc86696cf52aa20cde2ed5a1e6bfbd721d45c4c6ad555404e858a10"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.432377 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.438540 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.443670 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config" (OuterVolumeSpecName: "config") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.445248 4886 generic.go:334] "Generic (PLEG): container finished" podID="cf298b43-d3b1-44fd-a07c-5b6d475256b1" containerID="1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21" exitCode=0 Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.445333 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" event={"ID":"cf298b43-d3b1-44fd-a07c-5b6d475256b1","Type":"ContainerDied","Data":"1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.445360 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" event={"ID":"cf298b43-d3b1-44fd-a07c-5b6d475256b1","Type":"ContainerDied","Data":"c839a6c1d31df303de84588e896bf81651c5a6f09056db66c36910972605600d"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.445376 4886 scope.go:117] "RemoveContainer" containerID="1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.445472 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-8gccj" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.452791 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmr6n" event={"ID":"83d1e69b-5951-43d6-a54b-c73956bb3356","Type":"ContainerStarted","Data":"c396d20370fef6941299ce014881453eb1442244ff203ec95ed6751621e3e460"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.453853 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config\") pod \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\" (UID: \"cf298b43-d3b1-44fd-a07c-5b6d475256b1\") " Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.454405 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpgm6\" (UniqueName: \"kubernetes.io/projected/cf298b43-d3b1-44fd-a07c-5b6d475256b1-kube-api-access-jpgm6\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.454420 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.454434 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.454444 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:19 crc kubenswrapper[4886]: W0219 21:20:19.454516 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: 
/var/lib/kubelet/pods/cf298b43-d3b1-44fd-a07c-5b6d475256b1/volumes/kubernetes.io~configmap/config Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.454528 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config" (OuterVolumeSpecName: "config") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.471809 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf298b43-d3b1-44fd-a07c-5b6d475256b1" (UID: "cf298b43-d3b1-44fd-a07c-5b6d475256b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.477213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjrqx" event={"ID":"9f14cfdd-608a-42ab-9195-b9773729d874","Type":"ContainerStarted","Data":"ce48641e412b2ea34f23c4d2495d0621fa6f8ba7d771b867cbd84b8c6785402e"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.477290 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjrqx" event={"ID":"9f14cfdd-608a-42ab-9195-b9773729d874","Type":"ContainerStarted","Data":"79b2991deb184c924c60b10408337406ff6b37dd7506902b1facbb9a57e18e4b"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.504700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vsht2" event={"ID":"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624","Type":"ContainerStarted","Data":"dc06bce21f9af90aad092f70022ebda4aafc628f0f98bf1b203504b86416f931"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.519707 4886 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.524089 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x27rt" event={"ID":"336b4fc8-890f-4ace-baa3-587ebc3b27db","Type":"ContainerStarted","Data":"ae2357f768be389806e85e34407260671d23c579a4d8d370340e40fe7b6d998d"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.526458 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rlz62" event={"ID":"956c70ec-60b5-4909-b686-66971581b168","Type":"ContainerStarted","Data":"7544c266f1e7db7ad0aee365345495aba93e569fa49611ea88e9f823b2d49b27"} Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.542025 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jjrqx" podStartSLOduration=3.542007462 podStartE2EDuration="3.542007462s" podCreationTimestamp="2026-02-19 21:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:19.496311374 +0000 UTC m=+1250.124154424" watchObservedRunningTime="2026-02-19 21:20:19.542007462 +0000 UTC m=+1250.169850512" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.556842 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.556873 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf298b43-d3b1-44fd-a07c-5b6d475256b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.566985 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vsht2" podStartSLOduration=3.566970329 
podStartE2EDuration="3.566970329s" podCreationTimestamp="2026-02-19 21:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:19.55407698 +0000 UTC m=+1250.181920040" watchObservedRunningTime="2026-02-19 21:20:19.566970329 +0000 UTC m=+1250.194813379" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.661533 4886 scope.go:117] "RemoveContainer" containerID="1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21" Feb 19 21:20:19 crc kubenswrapper[4886]: E0219 21:20:19.678415 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21\": container with ID starting with 1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21 not found: ID does not exist" containerID="1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.678483 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21"} err="failed to get container status \"1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21\": rpc error: code = NotFound desc = could not find container \"1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21\": container with ID starting with 1c2f6da76fcbb176efbc2f06009c8282961638c69a69260ed0c8e2e8eb18fe21 not found: ID does not exist" Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.693342 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.745198 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.958750 4886 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-8gccj"] Feb 19 21:20:19 crc kubenswrapper[4886]: I0219 21:20:19.968159 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-8gccj"] Feb 19 21:20:20 crc kubenswrapper[4886]: I0219 21:20:20.581587 4886 generic.go:334] "Generic (PLEG): container finished" podID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerID="70e67ead190e42e0102382fc128f4301f3c7c581f909c95b2138b1490779e552" exitCode=0 Feb 19 21:20:20 crc kubenswrapper[4886]: I0219 21:20:20.582055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" event={"ID":"9ccc4c92-a3b0-47ea-9620-830916d087ab","Type":"ContainerDied","Data":"70e67ead190e42e0102382fc128f4301f3c7c581f909c95b2138b1490779e552"} Feb 19 21:20:20 crc kubenswrapper[4886]: I0219 21:20:20.621734 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf298b43-d3b1-44fd-a07c-5b6d475256b1" path="/var/lib/kubelet/pods/cf298b43-d3b1-44fd-a07c-5b6d475256b1/volumes" Feb 19 21:20:20 crc kubenswrapper[4886]: I0219 21:20:20.640154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5446e6a9-a091-4fa1-b7fc-9a5c0282c390","Type":"ContainerStarted","Data":"02b77410e34bcb3198bdb4741a7115644fa461e771fac022d18b8485ca995493"} Feb 19 21:20:20 crc kubenswrapper[4886]: I0219 21:20:20.649708 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b4b3b88-fd86-414c-a709-6ebb939c03b7","Type":"ContainerStarted","Data":"6053eb67cd90acbd30b2f7d9a369bec333b248bf51f2575ab34e4e842ca8e170"} Feb 19 21:20:21 crc kubenswrapper[4886]: I0219 21:20:21.663660 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1b4b3b88-fd86-414c-a709-6ebb939c03b7","Type":"ContainerStarted","Data":"3d9a6d9fd438bffb9555d889339ca760fb9aea9bc8d865a7d47c31c6b0a442d5"} Feb 19 21:20:21 crc kubenswrapper[4886]: I0219 21:20:21.670386 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" event={"ID":"9ccc4c92-a3b0-47ea-9620-830916d087ab","Type":"ContainerStarted","Data":"73b93dbc355b54e7626fcb45559e72b99a024f171194d737f734b86fd4590916"} Feb 19 21:20:21 crc kubenswrapper[4886]: I0219 21:20:21.671625 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:21 crc kubenswrapper[4886]: I0219 21:20:21.673787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5446e6a9-a091-4fa1-b7fc-9a5c0282c390","Type":"ContainerStarted","Data":"7b484b4b36625851f3c56a3b40ae3e90e01992563058b90e1177f46bfc45602c"} Feb 19 21:20:21 crc kubenswrapper[4886]: I0219 21:20:21.715326 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" podStartSLOduration=4.71528008 podStartE2EDuration="4.71528008s" podCreationTimestamp="2026-02-19 21:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:21.689395491 +0000 UTC m=+1252.317238541" watchObservedRunningTime="2026-02-19 21:20:21.71528008 +0000 UTC m=+1252.343123130" Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.697278 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5446e6a9-a091-4fa1-b7fc-9a5c0282c390","Type":"ContainerStarted","Data":"572c3f43da6f370faad3acb7c2d88115362f227242a124580889f9d170a14305"} Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.697393 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-log" containerID="cri-o://7b484b4b36625851f3c56a3b40ae3e90e01992563058b90e1177f46bfc45602c" gracePeriod=30 Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.697670 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-httpd" containerID="cri-o://572c3f43da6f370faad3acb7c2d88115362f227242a124580889f9d170a14305" gracePeriod=30 Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.711296 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b4b3b88-fd86-414c-a709-6ebb939c03b7","Type":"ContainerStarted","Data":"878b6a70d5f8a0b8041f123982916482531a7c40cb31906ed7c75fde948a2212"} Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.711414 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-log" containerID="cri-o://3d9a6d9fd438bffb9555d889339ca760fb9aea9bc8d865a7d47c31c6b0a442d5" gracePeriod=30 Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.711518 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-httpd" containerID="cri-o://878b6a70d5f8a0b8041f123982916482531a7c40cb31906ed7c75fde948a2212" gracePeriod=30 Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.735218 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.735196762 podStartE2EDuration="6.735196762s" podCreationTimestamp="2026-02-19 21:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:22.719768171 +0000 UTC m=+1253.347611221" watchObservedRunningTime="2026-02-19 21:20:22.735196762 +0000 UTC m=+1253.363039812" Feb 19 21:20:22 crc kubenswrapper[4886]: I0219 21:20:22.756779 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.756754394 podStartE2EDuration="6.756754394s" podCreationTimestamp="2026-02-19 21:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:22.745342423 +0000 UTC m=+1253.373185473" watchObservedRunningTime="2026-02-19 21:20:22.756754394 +0000 UTC m=+1253.384597444" Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.726047 4886 generic.go:334] "Generic (PLEG): container finished" podID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerID="572c3f43da6f370faad3acb7c2d88115362f227242a124580889f9d170a14305" exitCode=0 Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.726325 4886 generic.go:334] "Generic (PLEG): container finished" podID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerID="7b484b4b36625851f3c56a3b40ae3e90e01992563058b90e1177f46bfc45602c" exitCode=143 Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.726085 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5446e6a9-a091-4fa1-b7fc-9a5c0282c390","Type":"ContainerDied","Data":"572c3f43da6f370faad3acb7c2d88115362f227242a124580889f9d170a14305"} Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.726392 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5446e6a9-a091-4fa1-b7fc-9a5c0282c390","Type":"ContainerDied","Data":"7b484b4b36625851f3c56a3b40ae3e90e01992563058b90e1177f46bfc45602c"} Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.728607 4886 
generic.go:334] "Generic (PLEG): container finished" podID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerID="878b6a70d5f8a0b8041f123982916482531a7c40cb31906ed7c75fde948a2212" exitCode=0 Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.728629 4886 generic.go:334] "Generic (PLEG): container finished" podID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerID="3d9a6d9fd438bffb9555d889339ca760fb9aea9bc8d865a7d47c31c6b0a442d5" exitCode=143 Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.729475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b4b3b88-fd86-414c-a709-6ebb939c03b7","Type":"ContainerDied","Data":"878b6a70d5f8a0b8041f123982916482531a7c40cb31906ed7c75fde948a2212"} Feb 19 21:20:23 crc kubenswrapper[4886]: I0219 21:20:23.729503 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b4b3b88-fd86-414c-a709-6ebb939c03b7","Type":"ContainerDied","Data":"3d9a6d9fd438bffb9555d889339ca760fb9aea9bc8d865a7d47c31c6b0a442d5"} Feb 19 21:20:24 crc kubenswrapper[4886]: I0219 21:20:24.742864 4886 generic.go:334] "Generic (PLEG): container finished" podID="b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" containerID="dc06bce21f9af90aad092f70022ebda4aafc628f0f98bf1b203504b86416f931" exitCode=0 Feb 19 21:20:24 crc kubenswrapper[4886]: I0219 21:20:24.743404 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vsht2" event={"ID":"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624","Type":"ContainerDied","Data":"dc06bce21f9af90aad092f70022ebda4aafc628f0f98bf1b203504b86416f931"} Feb 19 21:20:27 crc kubenswrapper[4886]: I0219 21:20:27.810405 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:27 crc kubenswrapper[4886]: I0219 21:20:27.880988 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-29fmc"] Feb 19 21:20:27 
crc kubenswrapper[4886]: I0219 21:20:27.881748 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" containerID="cri-o://aa77ba5b5db818dabf56bfe61c4ddee323d1a77018c113fd78c042c9786b1786" gracePeriod=10 Feb 19 21:20:28 crc kubenswrapper[4886]: I0219 21:20:28.802069 4886 generic.go:334] "Generic (PLEG): container finished" podID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerID="aa77ba5b5db818dabf56bfe61c4ddee323d1a77018c113fd78c042c9786b1786" exitCode=0 Feb 19 21:20:28 crc kubenswrapper[4886]: I0219 21:20:28.802180 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" event={"ID":"f2bb1ddd-f03a-4236-b0ea-b35e551916e5","Type":"ContainerDied","Data":"aa77ba5b5db818dabf56bfe61c4ddee323d1a77018c113fd78c042c9786b1786"} Feb 19 21:20:30 crc kubenswrapper[4886]: I0219 21:20:30.345513 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Feb 19 21:20:35 crc kubenswrapper[4886]: I0219 21:20:35.345077 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.254618 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.260949 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.268941 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391041 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-logs\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391438 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-httpd-run\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-httpd-run\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391498 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-combined-ca-bundle\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391541 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gkkk\" (UniqueName: \"kubernetes.io/projected/1b4b3b88-fd86-414c-a709-6ebb939c03b7-kube-api-access-4gkkk\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: 
\"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391630 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-logs\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391703 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-config-data\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391927 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.391952 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392066 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-logs" (OuterVolumeSpecName: "logs") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392342 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-logs" (OuterVolumeSpecName: "logs") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392587 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392670 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc8lk\" (UniqueName: \"kubernetes.io/projected/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-kube-api-access-hc8lk\") pod \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392700 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfh9x\" (UniqueName: \"kubernetes.io/projected/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-kube-api-access-nfh9x\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392785 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392818 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-credential-keys\") pod \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392893 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-combined-ca-bundle\") pod \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.392940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-internal-tls-certs\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393006 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-combined-ca-bundle\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393064 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-public-tls-certs\") pod \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393154 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-scripts\") pod 
\"1b4b3b88-fd86-414c-a709-6ebb939c03b7\" (UID: \"1b4b3b88-fd86-414c-a709-6ebb939c03b7\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393186 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-scripts\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393243 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-fernet-keys\") pod \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393320 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-config-data\") pod \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\" (UID: \"5446e6a9-a091-4fa1-b7fc-9a5c0282c390\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393376 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-config-data\") pod \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.393450 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-scripts\") pod \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\" (UID: \"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624\") " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.395161 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.395183 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.395197 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b4b3b88-fd86-414c-a709-6ebb939c03b7-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.395207 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.401160 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4b3b88-fd86-414c-a709-6ebb939c03b7-kube-api-access-4gkkk" (OuterVolumeSpecName: "kube-api-access-4gkkk") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "kube-api-access-4gkkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.403884 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-scripts" (OuterVolumeSpecName: "scripts") pod "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" (UID: "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.403890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-scripts" (OuterVolumeSpecName: "scripts") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.406935 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-kube-api-access-hc8lk" (OuterVolumeSpecName: "kube-api-access-hc8lk") pod "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" (UID: "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624"). InnerVolumeSpecName "kube-api-access-hc8lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.427367 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" (UID: "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.446895 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-scripts" (OuterVolumeSpecName: "scripts") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.446904 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" (UID: "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.461246 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-kube-api-access-nfh9x" (OuterVolumeSpecName: "kube-api-access-nfh9x") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "kube-api-access-nfh9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.465704 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef" (OuterVolumeSpecName: "glance") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "pvc-59d9fab8-7cd6-4599-9419-be74d0657eef". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498508 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498535 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gkkk\" (UniqueName: \"kubernetes.io/projected/1b4b3b88-fd86-414c-a709-6ebb939c03b7-kube-api-access-4gkkk\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498563 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") on node \"crc\" " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498575 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc8lk\" (UniqueName: \"kubernetes.io/projected/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-kube-api-access-hc8lk\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498584 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfh9x\" (UniqueName: \"kubernetes.io/projected/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-kube-api-access-nfh9x\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498592 4886 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498600 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 
crc kubenswrapper[4886]: I0219 21:20:36.498610 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.498620 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.507283 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1" (OuterVolumeSpecName: "glance") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.507817 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.539726 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.546913 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.547075 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-59d9fab8-7cd6-4599-9419-be74d0657eef" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef") on node "crc" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.569250 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" (UID: "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.578989 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-config-data" (OuterVolumeSpecName: "config-data") pod "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" (UID: "b9cb3a9c-8a46-4db4-9b02-f8b4f6493624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.587901 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-config-data" (OuterVolumeSpecName: "config-data") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.597628 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-config-data" (OuterVolumeSpecName: "config-data") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.599991 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5446e6a9-a091-4fa1-b7fc-9a5c0282c390" (UID: "5446e6a9-a091-4fa1-b7fc-9a5c0282c390"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600389 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600459 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600471 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600481 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") on 
node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600510 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") on node \"crc\" " Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600523 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600531 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600542 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.600553 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5446e6a9-a091-4fa1-b7fc-9a5c0282c390-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.604857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1b4b3b88-fd86-414c-a709-6ebb939c03b7" (UID: "1b4b3b88-fd86-414c-a709-6ebb939c03b7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.644136 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.644358 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1") on node "crc" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.702917 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.702953 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4b3b88-fd86-414c-a709-6ebb939c03b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.933119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b4b3b88-fd86-414c-a709-6ebb939c03b7","Type":"ContainerDied","Data":"6053eb67cd90acbd30b2f7d9a369bec333b248bf51f2575ab34e4e842ca8e170"} Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.933616 4886 scope.go:117] "RemoveContainer" containerID="878b6a70d5f8a0b8041f123982916482531a7c40cb31906ed7c75fde948a2212" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.933190 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.938705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vsht2" event={"ID":"b9cb3a9c-8a46-4db4-9b02-f8b4f6493624","Type":"ContainerDied","Data":"7bf14f99bc3c3c78d8c9e1a02501de84920dcec108fe314a903f4cc2789f9904"} Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.938760 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf14f99bc3c3c78d8c9e1a02501de84920dcec108fe314a903f4cc2789f9904" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.938774 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vsht2" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.944093 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5446e6a9-a091-4fa1-b7fc-9a5c0282c390","Type":"ContainerDied","Data":"02b77410e34bcb3198bdb4741a7115644fa461e771fac022d18b8485ca995493"} Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.944179 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.968358 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:36 crc kubenswrapper[4886]: I0219 21:20:36.982355 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.004861 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.022202 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.035872 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: E0219 21:20:37.036408 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-log" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036434 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-log" Feb 19 21:20:37 crc kubenswrapper[4886]: E0219 21:20:37.036447 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" containerName="keystone-bootstrap" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036455 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" containerName="keystone-bootstrap" Feb 19 21:20:37 crc kubenswrapper[4886]: E0219 21:20:37.036476 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-log" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036484 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-log" Feb 19 21:20:37 crc kubenswrapper[4886]: E0219 21:20:37.036503 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-httpd" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036512 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-httpd" Feb 19 21:20:37 crc kubenswrapper[4886]: E0219 21:20:37.036525 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-httpd" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036532 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-httpd" Feb 19 21:20:37 crc kubenswrapper[4886]: E0219 21:20:37.036575 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf298b43-d3b1-44fd-a07c-5b6d475256b1" containerName="init" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036621 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf298b43-d3b1-44fd-a07c-5b6d475256b1" containerName="init" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036855 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-httpd" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036870 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" containerName="glance-log" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036881 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-httpd" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036899 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf298b43-d3b1-44fd-a07c-5b6d475256b1" 
containerName="init" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036922 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" containerName="keystone-bootstrap" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.036931 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" containerName="glance-log" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.044397 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.048306 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.048513 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jvnwk" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.049632 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.049713 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.060816 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.063145 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.074691 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.074822 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.074865 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.089872 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.110606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lqm\" (UniqueName: \"kubernetes.io/projected/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-kube-api-access-f8lqm\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.110691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pfm\" (UniqueName: \"kubernetes.io/projected/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-kube-api-access-q8pfm\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.110771 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " 
pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.110857 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.110913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111006 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-scripts\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111064 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111285 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111325 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-config-data\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111367 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-logs\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111556 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.111595 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.213903 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.213956 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.213981 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-scripts\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214024 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214079 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214103 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214154 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214178 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-config-data\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214202 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-logs\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214463 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lqm\" (UniqueName: \"kubernetes.io/projected/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-kube-api-access-f8lqm\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8pfm\" (UniqueName: \"kubernetes.io/projected/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-kube-api-access-q8pfm\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214574 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.214906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-logs\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.215145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.216689 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.219115 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.219138 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.219158 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24501a7431bc13ebaff906718e6fc5aea919dc38d675793dfa72a2f1e9cb67ce/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.219180 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa954a4659b21b6248d850d29df2a32c3efe0ed6ad4129bfc4bfafd49a05e255/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.219697 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.221186 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc 
kubenswrapper[4886]: I0219 21:20:37.221756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.224668 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-config-data\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.228994 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-scripts\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.230354 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.230513 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.232719 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.234055 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8pfm\" (UniqueName: \"kubernetes.io/projected/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-kube-api-access-q8pfm\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.235527 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lqm\" (UniqueName: \"kubernetes.io/projected/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-kube-api-access-f8lqm\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.266724 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.285430 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.375447 4886 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.435431 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.447992 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vsht2"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.456474 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vsht2"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.542563 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8fg8c"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.544324 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.546543 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.546554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gw5vn" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.546554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.548087 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.549230 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.552169 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8fg8c"] Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.724102 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-combined-ca-bundle\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.725606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-scripts\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.725705 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-fernet-keys\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.725884 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-config-data\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.725995 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-credential-keys\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.726058 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nd5\" (UniqueName: \"kubernetes.io/projected/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-kube-api-access-m5nd5\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.828283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-combined-ca-bundle\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.828378 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-scripts\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.828421 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-fernet-keys\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.828479 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-config-data\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.828520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-credential-keys\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.828543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nd5\" (UniqueName: \"kubernetes.io/projected/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-kube-api-access-m5nd5\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.848918 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-config-data\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.849408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nd5\" (UniqueName: \"kubernetes.io/projected/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-kube-api-access-m5nd5\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.849796 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-fernet-keys\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.849959 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-combined-ca-bundle\") pod \"keystone-bootstrap-8fg8c\" 
(UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.851681 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-scripts\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.854702 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-credential-keys\") pod \"keystone-bootstrap-8fg8c\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:37 crc kubenswrapper[4886]: I0219 21:20:37.869036 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:38 crc kubenswrapper[4886]: I0219 21:20:38.616374 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4b3b88-fd86-414c-a709-6ebb939c03b7" path="/var/lib/kubelet/pods/1b4b3b88-fd86-414c-a709-6ebb939c03b7/volumes" Feb 19 21:20:38 crc kubenswrapper[4886]: I0219 21:20:38.617473 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5446e6a9-a091-4fa1-b7fc-9a5c0282c390" path="/var/lib/kubelet/pods/5446e6a9-a091-4fa1-b7fc-9a5c0282c390/volumes" Feb 19 21:20:38 crc kubenswrapper[4886]: I0219 21:20:38.618613 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9cb3a9c-8a46-4db4-9b02-f8b4f6493624" path="/var/lib/kubelet/pods/b9cb3a9c-8a46-4db4-9b02-f8b4f6493624/volumes" Feb 19 21:20:42 crc kubenswrapper[4886]: I0219 21:20:41.999669 4886 generic.go:334] "Generic (PLEG): container finished" podID="9f14cfdd-608a-42ab-9195-b9773729d874" containerID="ce48641e412b2ea34f23c4d2495d0621fa6f8ba7d771b867cbd84b8c6785402e" 
exitCode=0 Feb 19 21:20:42 crc kubenswrapper[4886]: I0219 21:20:41.999772 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjrqx" event={"ID":"9f14cfdd-608a-42ab-9195-b9773729d874","Type":"ContainerDied","Data":"ce48641e412b2ea34f23c4d2495d0621fa6f8ba7d771b867cbd84b8c6785402e"} Feb 19 21:20:44 crc kubenswrapper[4886]: E0219 21:20:44.967674 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 19 21:20:44 crc kubenswrapper[4886]: E0219 21:20:44.968184 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgcch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,R
eadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xmr6n_openstack(83d1e69b-5951-43d6-a54b-c73956bb3356): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:20:44 crc kubenswrapper[4886]: E0219 21:20:44.969448 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xmr6n" podUID="83d1e69b-5951-43d6-a54b-c73956bb3356" Feb 19 21:20:45 crc kubenswrapper[4886]: E0219 21:20:45.044029 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xmr6n" podUID="83d1e69b-5951-43d6-a54b-c73956bb3356" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.344576 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.344932 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:20:45 crc kubenswrapper[4886]: E0219 21:20:45.531953 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 19 21:20:45 crc kubenswrapper[4886]: E0219 21:20:45.532497 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lgkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-x27rt_openstack(336b4fc8-890f-4ace-baa3-587ebc3b27db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:20:45 crc kubenswrapper[4886]: E0219 21:20:45.534149 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-x27rt" podUID="336b4fc8-890f-4ace-baa3-587ebc3b27db" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.539769 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.547121 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.707902 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-svc\") pod \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.707968 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-config\") pod \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708019 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-combined-ca-bundle\") pod \"9f14cfdd-608a-42ab-9195-b9773729d874\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708053 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-sb\") pod \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708117 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-nb\") pod \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708151 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x9cs\" (UniqueName: 
\"kubernetes.io/projected/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-kube-api-access-7x9cs\") pod \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708271 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-config\") pod \"9f14cfdd-608a-42ab-9195-b9773729d874\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708370 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkkb\" (UniqueName: \"kubernetes.io/projected/9f14cfdd-608a-42ab-9195-b9773729d874-kube-api-access-zwkkb\") pod \"9f14cfdd-608a-42ab-9195-b9773729d874\" (UID: \"9f14cfdd-608a-42ab-9195-b9773729d874\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.708403 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-swift-storage-0\") pod \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\" (UID: \"f2bb1ddd-f03a-4236-b0ea-b35e551916e5\") " Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.720690 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f14cfdd-608a-42ab-9195-b9773729d874-kube-api-access-zwkkb" (OuterVolumeSpecName: "kube-api-access-zwkkb") pod "9f14cfdd-608a-42ab-9195-b9773729d874" (UID: "9f14cfdd-608a-42ab-9195-b9773729d874"). InnerVolumeSpecName "kube-api-access-zwkkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.720858 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-kube-api-access-7x9cs" (OuterVolumeSpecName: "kube-api-access-7x9cs") pod "f2bb1ddd-f03a-4236-b0ea-b35e551916e5" (UID: "f2bb1ddd-f03a-4236-b0ea-b35e551916e5"). InnerVolumeSpecName "kube-api-access-7x9cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.737025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-config" (OuterVolumeSpecName: "config") pod "9f14cfdd-608a-42ab-9195-b9773729d874" (UID: "9f14cfdd-608a-42ab-9195-b9773729d874"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.751706 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f14cfdd-608a-42ab-9195-b9773729d874" (UID: "9f14cfdd-608a-42ab-9195-b9773729d874"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.766913 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2bb1ddd-f03a-4236-b0ea-b35e551916e5" (UID: "f2bb1ddd-f03a-4236-b0ea-b35e551916e5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.769928 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2bb1ddd-f03a-4236-b0ea-b35e551916e5" (UID: "f2bb1ddd-f03a-4236-b0ea-b35e551916e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.772887 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2bb1ddd-f03a-4236-b0ea-b35e551916e5" (UID: "f2bb1ddd-f03a-4236-b0ea-b35e551916e5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.779721 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2bb1ddd-f03a-4236-b0ea-b35e551916e5" (UID: "f2bb1ddd-f03a-4236-b0ea-b35e551916e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.792695 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-config" (OuterVolumeSpecName: "config") pod "f2bb1ddd-f03a-4236-b0ea-b35e551916e5" (UID: "f2bb1ddd-f03a-4236-b0ea-b35e551916e5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.810999 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811025 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x9cs\" (UniqueName: \"kubernetes.io/projected/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-kube-api-access-7x9cs\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811037 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811046 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkkb\" (UniqueName: \"kubernetes.io/projected/9f14cfdd-608a-42ab-9195-b9773729d874-kube-api-access-zwkkb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811054 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811063 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811071 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811079 4886 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f14cfdd-608a-42ab-9195-b9773729d874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:45 crc kubenswrapper[4886]: I0219 21:20:45.811086 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2bb1ddd-f03a-4236-b0ea-b35e551916e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.055024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" event={"ID":"f2bb1ddd-f03a-4236-b0ea-b35e551916e5","Type":"ContainerDied","Data":"5f0d2077afc6ad99fc19cf667f4e4039c1c94661574c01181eda6e962060e477"} Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.055098 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.060906 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jjrqx" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.060908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjrqx" event={"ID":"9f14cfdd-608a-42ab-9195-b9773729d874","Type":"ContainerDied","Data":"79b2991deb184c924c60b10408337406ff6b37dd7506902b1facbb9a57e18e4b"} Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.060982 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b2991deb184c924c60b10408337406ff6b37dd7506902b1facbb9a57e18e4b" Feb 19 21:20:46 crc kubenswrapper[4886]: E0219 21:20:46.063599 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-x27rt" podUID="336b4fc8-890f-4ace-baa3-587ebc3b27db" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.111299 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-29fmc"] Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.121552 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-29fmc"] Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.625158 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" path="/var/lib/kubelet/pods/f2bb1ddd-f03a-4236-b0ea-b35e551916e5/volumes" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.863020 4886 scope.go:117] "RemoveContainer" containerID="3d9a6d9fd438bffb9555d889339ca760fb9aea9bc8d865a7d47c31c6b0a442d5" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.870164 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79lgs"] Feb 19 21:20:46 crc kubenswrapper[4886]: E0219 21:20:46.870738 4886 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9f14cfdd-608a-42ab-9195-b9773729d874" containerName="neutron-db-sync" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.870756 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f14cfdd-608a-42ab-9195-b9773729d874" containerName="neutron-db-sync" Feb 19 21:20:46 crc kubenswrapper[4886]: E0219 21:20:46.870781 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="init" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.870788 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="init" Feb 19 21:20:46 crc kubenswrapper[4886]: E0219 21:20:46.870815 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.870821 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.870993 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.871024 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f14cfdd-608a-42ab-9195-b9773729d874" containerName="neutron-db-sync" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.872126 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.901672 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79lgs"] Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.978650 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58f7d46f6b-95bx9"] Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.984633 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.986386 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.986958 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tcw9n" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.987092 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 21:20:46 crc kubenswrapper[4886]: I0219 21:20:46.987193 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.002486 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58f7d46f6b-95bx9"] Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.055851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttcw2\" (UniqueName: \"kubernetes.io/projected/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-kube-api-access-ttcw2\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.056223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.057973 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.058097 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.058211 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-config\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.058321 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.068213 4886 scope.go:117] "RemoveContainer" 
containerID="572c3f43da6f370faad3acb7c2d88115362f227242a124580889f9d170a14305" Feb 19 21:20:47 crc kubenswrapper[4886]: E0219 21:20:47.074859 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 19 21:20:47 crc kubenswrapper[4886]: E0219 21:20:47.075016 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.p
em,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcf8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rlz62_openstack(956c70ec-60b5-4909-b686-66971581b168): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:20:47 crc kubenswrapper[4886]: E0219 21:20:47.078278 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rlz62" podUID="956c70ec-60b5-4909-b686-66971581b168" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160692 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160774 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-ovndb-tls-certs\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160836 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160868 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-config\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160924 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-combined-ca-bundle\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.160980 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-httpd-config\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.161041 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-config\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.161095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.161150 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twkbn\" (UniqueName: \"kubernetes.io/projected/2b04a1d9-e072-4510-92cf-a0698dd7acd7-kube-api-access-twkbn\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.161194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttcw2\" (UniqueName: \"kubernetes.io/projected/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-kube-api-access-ttcw2\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.162212 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.162224 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.162500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-config\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.163187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.163953 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.180019 4886 scope.go:117] "RemoveContainer" containerID="7b484b4b36625851f3c56a3b40ae3e90e01992563058b90e1177f46bfc45602c" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.213565 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ttcw2\" (UniqueName: \"kubernetes.io/projected/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-kube-api-access-ttcw2\") pod \"dnsmasq-dns-55f844cf75-79lgs\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.266873 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twkbn\" (UniqueName: \"kubernetes.io/projected/2b04a1d9-e072-4510-92cf-a0698dd7acd7-kube-api-access-twkbn\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.267049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-ovndb-tls-certs\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.267143 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-config\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.267199 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-combined-ca-bundle\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.267227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-httpd-config\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.274006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-combined-ca-bundle\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.276426 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-config\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.279021 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-ovndb-tls-certs\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.280583 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-httpd-config\") pod \"neutron-58f7d46f6b-95bx9\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.295842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twkbn\" (UniqueName: \"kubernetes.io/projected/2b04a1d9-e072-4510-92cf-a0698dd7acd7-kube-api-access-twkbn\") pod \"neutron-58f7d46f6b-95bx9\" (UID: 
\"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.348734 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.375445 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.388840 4886 scope.go:117] "RemoveContainer" containerID="aa77ba5b5db818dabf56bfe61c4ddee323d1a77018c113fd78c042c9786b1786" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.472041 4886 scope.go:117] "RemoveContainer" containerID="ce4b951eb6b17dba557bef6c9f5bad151540f3499a1f9585e04c0b7c8f5b073a" Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.612583 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8fg8c"] Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.627800 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.800670 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:20:47 crc kubenswrapper[4886]: I0219 21:20:47.989194 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79lgs"] Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.148682 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8fg8c" event={"ID":"2cfadf95-ec64-4d3a-856f-22c0a4b65d54","Type":"ContainerStarted","Data":"b8a380af2e5ea1588188c064c9d3c87c613dad353837f23ae5e1330043ae6da7"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.150165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8fg8c" 
event={"ID":"2cfadf95-ec64-4d3a-856f-22c0a4b65d54","Type":"ContainerStarted","Data":"67d8eb13cb9a7fb2f4eaff70d483e99e4dc587b03188963766808c99286772e5"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.152809 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3201a6a-0782-4c3e-b43d-89ce0f4a029c","Type":"ContainerStarted","Data":"2d5567b8c0f4265b70f08a86fd4db7f6c240e6139de82bb83e6169ae004b0dbb"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.156330 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4rq5v" event={"ID":"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7","Type":"ContainerStarted","Data":"dac5b9d178df92c1d37ce46f555f44e5975dba4d9bfdfb41e8612fece6ecfe80"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.161954 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerStarted","Data":"f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.165070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" event={"ID":"9596f93c-54cf-4dfd-b8f7-f6a6c19be356","Type":"ContainerStarted","Data":"e7f6321614df36014e9429b06e127219b5e16816e1f8824eb874466e1ce79cbe"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.189101 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45","Type":"ContainerStarted","Data":"8c91043156607e89e0de683b411f1a5970c6664f8c3dbb3e7dc6c4c4864cee03"} Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.190806 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8fg8c" podStartSLOduration=11.190784691 podStartE2EDuration="11.190784691s" podCreationTimestamp="2026-02-19 21:20:37 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:48.165589429 +0000 UTC m=+1278.793432479" watchObservedRunningTime="2026-02-19 21:20:48.190784691 +0000 UTC m=+1278.818627751" Feb 19 21:20:48 crc kubenswrapper[4886]: E0219 21:20:48.196747 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rlz62" podUID="956c70ec-60b5-4909-b686-66971581b168" Feb 19 21:20:48 crc kubenswrapper[4886]: W0219 21:20:48.227385 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-be6ae9723bd016c2db6c438278c28309536cf60edaa13da682f9ee122a08308a WatchSource:0}: Error finding container be6ae9723bd016c2db6c438278c28309536cf60edaa13da682f9ee122a08308a: Status 404 returned error can't find the container with id be6ae9723bd016c2db6c438278c28309536cf60edaa13da682f9ee122a08308a Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.231409 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58f7d46f6b-95bx9"] Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.241103 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4rq5v" podStartSLOduration=5.483169229 podStartE2EDuration="31.241086693s" podCreationTimestamp="2026-02-19 21:20:17 +0000 UTC" firstStartedPulling="2026-02-19 21:20:19.17294401 +0000 UTC m=+1249.800787060" lastFinishedPulling="2026-02-19 21:20:44.930861444 +0000 UTC m=+1275.558704524" observedRunningTime="2026-02-19 21:20:48.191515839 +0000 UTC m=+1278.819358899" watchObservedRunningTime="2026-02-19 21:20:48.241086693 +0000 UTC m=+1278.868929743" Feb 19 
21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.337154 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.337215 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:20:48 crc kubenswrapper[4886]: I0219 21:20:48.798694 4886 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda61930df-09b7-4635-a89f-71207b2f4e01"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda61930df-09b7-4635-a89f-71207b2f4e01] : Timed out while waiting for systemd to remove kubepods-besteffort-poda61930df_09b7_4635_a89f_71207b2f4e01.slice" Feb 19 21:20:48 crc kubenswrapper[4886]: E0219 21:20:48.798979 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda61930df-09b7-4635-a89f-71207b2f4e01] : unable to destroy cgroup paths for cgroup [kubepods besteffort poda61930df-09b7-4635-a89f-71207b2f4e01] : Timed out while waiting for systemd to remove kubepods-besteffort-poda61930df_09b7_4635_a89f_71207b2f4e01.slice" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.231790 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45","Type":"ContainerStarted","Data":"73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3"} Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.242039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f7d46f6b-95bx9" event={"ID":"2b04a1d9-e072-4510-92cf-a0698dd7acd7","Type":"ContainerStarted","Data":"3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b"} Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.242081 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f7d46f6b-95bx9" event={"ID":"2b04a1d9-e072-4510-92cf-a0698dd7acd7","Type":"ContainerStarted","Data":"be6ae9723bd016c2db6c438278c28309536cf60edaa13da682f9ee122a08308a"} Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.251217 4886 generic.go:334] "Generic (PLEG): container finished" podID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerID="b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0" exitCode=0 Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.251318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" event={"ID":"9596f93c-54cf-4dfd-b8f7-f6a6c19be356","Type":"ContainerDied","Data":"b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0"} Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.258229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3201a6a-0782-4c3e-b43d-89ce0f4a029c","Type":"ContainerStarted","Data":"2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f"} Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.258298 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-xck4z" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.272991 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-597bbcffd5-5t75q"] Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.274682 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.284790 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.285395 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.302124 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-597bbcffd5-5t75q"] Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.456474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vxk\" (UniqueName: \"kubernetes.io/projected/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-kube-api-access-p8vxk\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.456825 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-public-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.456859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-combined-ca-bundle\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.456896 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-httpd-config\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.456923 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-internal-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.456981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-config\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.457023 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-ovndb-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.468959 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-xck4z"] Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 
21:20:49.484875 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-xck4z"] Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.559523 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-config\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.559916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-ovndb-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.560007 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vxk\" (UniqueName: \"kubernetes.io/projected/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-kube-api-access-p8vxk\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.560125 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-public-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.560163 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-combined-ca-bundle\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " 
pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.560214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-httpd-config\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.560248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-internal-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.566920 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-config\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.567411 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-internal-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.567551 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-httpd-config\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.570237 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-ovndb-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.575618 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-combined-ca-bundle\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.579232 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-public-tls-certs\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.585708 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vxk\" (UniqueName: \"kubernetes.io/projected/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-kube-api-access-p8vxk\") pod \"neutron-597bbcffd5-5t75q\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:49 crc kubenswrapper[4886]: I0219 21:20:49.761044 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.274785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" event={"ID":"9596f93c-54cf-4dfd-b8f7-f6a6c19be356","Type":"ContainerStarted","Data":"c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8"} Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.276048 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.293192 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3201a6a-0782-4c3e-b43d-89ce0f4a029c","Type":"ContainerStarted","Data":"d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b"} Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.306740 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45","Type":"ContainerStarted","Data":"e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09"} Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.318571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f7d46f6b-95bx9" event={"ID":"2b04a1d9-e072-4510-92cf-a0698dd7acd7","Type":"ContainerStarted","Data":"0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033"} Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.320106 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.320580 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" podStartSLOduration=4.320564355 podStartE2EDuration="4.320564355s" podCreationTimestamp="2026-02-19 21:20:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:50.303495603 +0000 UTC m=+1280.931338673" watchObservedRunningTime="2026-02-19 21:20:50.320564355 +0000 UTC m=+1280.948407405" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.328686 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.328646114 podStartE2EDuration="14.328646114s" podCreationTimestamp="2026-02-19 21:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:50.319460228 +0000 UTC m=+1280.947303268" watchObservedRunningTime="2026-02-19 21:20:50.328646114 +0000 UTC m=+1280.956489164" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.345904 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-29fmc" podUID="f2bb1ddd-f03a-4236-b0ea-b35e551916e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.352562 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58f7d46f6b-95bx9" podStartSLOduration=4.352538844 podStartE2EDuration="4.352538844s" podCreationTimestamp="2026-02-19 21:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:50.342403504 +0000 UTC m=+1280.970246564" watchObservedRunningTime="2026-02-19 21:20:50.352538844 +0000 UTC m=+1280.980381884" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.373282 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.373265316 podStartE2EDuration="14.373265316s" 
podCreationTimestamp="2026-02-19 21:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:50.369622216 +0000 UTC m=+1280.997465276" watchObservedRunningTime="2026-02-19 21:20:50.373265316 +0000 UTC m=+1281.001108366" Feb 19 21:20:50 crc kubenswrapper[4886]: I0219 21:20:50.614018 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61930df-09b7-4635-a89f-71207b2f4e01" path="/var/lib/kubelet/pods/a61930df-09b7-4635-a89f-71207b2f4e01/volumes" Feb 19 21:20:51 crc kubenswrapper[4886]: I0219 21:20:51.902595 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-597bbcffd5-5t75q"] Feb 19 21:20:52 crc kubenswrapper[4886]: I0219 21:20:52.342361 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597bbcffd5-5t75q" event={"ID":"80cfc748-e684-4b36-8c1c-a22a1ebaf84c","Type":"ContainerStarted","Data":"2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f"} Feb 19 21:20:52 crc kubenswrapper[4886]: I0219 21:20:52.342786 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597bbcffd5-5t75q" event={"ID":"80cfc748-e684-4b36-8c1c-a22a1ebaf84c","Type":"ContainerStarted","Data":"f2eedbea633b3f110cbf3df8f887d7d0836d25ff5ddbb15ddd434a43cd2ea5a2"} Feb 19 21:20:53 crc kubenswrapper[4886]: I0219 21:20:53.359292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597bbcffd5-5t75q" event={"ID":"80cfc748-e684-4b36-8c1c-a22a1ebaf84c","Type":"ContainerStarted","Data":"717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5"} Feb 19 21:20:53 crc kubenswrapper[4886]: I0219 21:20:53.359552 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:20:53 crc kubenswrapper[4886]: I0219 21:20:53.362152 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerStarted","Data":"76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c"} Feb 19 21:20:53 crc kubenswrapper[4886]: I0219 21:20:53.397781 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-597bbcffd5-5t75q" podStartSLOduration=4.397762751 podStartE2EDuration="4.397762751s" podCreationTimestamp="2026-02-19 21:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:20:53.389862496 +0000 UTC m=+1284.017705546" watchObservedRunningTime="2026-02-19 21:20:53.397762751 +0000 UTC m=+1284.025605801" Feb 19 21:20:55 crc kubenswrapper[4886]: I0219 21:20:55.397822 4886 generic.go:334] "Generic (PLEG): container finished" podID="2cfadf95-ec64-4d3a-856f-22c0a4b65d54" containerID="b8a380af2e5ea1588188c064c9d3c87c613dad353837f23ae5e1330043ae6da7" exitCode=0 Feb 19 21:20:55 crc kubenswrapper[4886]: I0219 21:20:55.397918 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8fg8c" event={"ID":"2cfadf95-ec64-4d3a-856f-22c0a4b65d54","Type":"ContainerDied","Data":"b8a380af2e5ea1588188c064c9d3c87c613dad353837f23ae5e1330043ae6da7"} Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.354444 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.382474 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.382650 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.468478 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.469805 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.506974 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.509126 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kts74"] Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.509383 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="dnsmasq-dns" containerID="cri-o://73b93dbc355b54e7626fcb45559e72b99a024f171194d737f734b86fd4590916" gracePeriod=10 Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.510001 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.537564 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.537655 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.545066 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 21:20:57 crc kubenswrapper[4886]: I0219 21:20:57.810414 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused" Feb 19 
21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.454169 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8fg8c" event={"ID":"2cfadf95-ec64-4d3a-856f-22c0a4b65d54","Type":"ContainerDied","Data":"67d8eb13cb9a7fb2f4eaff70d483e99e4dc587b03188963766808c99286772e5"} Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.454231 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d8eb13cb9a7fb2f4eaff70d483e99e4dc587b03188963766808c99286772e5" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.457588 4886 generic.go:334] "Generic (PLEG): container finished" podID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerID="73b93dbc355b54e7626fcb45559e72b99a024f171194d737f734b86fd4590916" exitCode=0 Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.457666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" event={"ID":"9ccc4c92-a3b0-47ea-9620-830916d087ab","Type":"ContainerDied","Data":"73b93dbc355b54e7626fcb45559e72b99a024f171194d737f734b86fd4590916"} Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.458199 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.458784 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.458802 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.469627 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.490787 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nd5\" (UniqueName: \"kubernetes.io/projected/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-kube-api-access-m5nd5\") pod \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.490857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-credential-keys\") pod \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.490885 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-fernet-keys\") pod \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.490901 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-scripts\") pod \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.490974 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-combined-ca-bundle\") pod \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.491024 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-config-data\") pod \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\" (UID: \"2cfadf95-ec64-4d3a-856f-22c0a4b65d54\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.500651 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-scripts" (OuterVolumeSpecName: "scripts") pod "2cfadf95-ec64-4d3a-856f-22c0a4b65d54" (UID: "2cfadf95-ec64-4d3a-856f-22c0a4b65d54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.510630 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2cfadf95-ec64-4d3a-856f-22c0a4b65d54" (UID: "2cfadf95-ec64-4d3a-856f-22c0a4b65d54"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.538691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-kube-api-access-m5nd5" (OuterVolumeSpecName: "kube-api-access-m5nd5") pod "2cfadf95-ec64-4d3a-856f-22c0a4b65d54" (UID: "2cfadf95-ec64-4d3a-856f-22c0a4b65d54"). InnerVolumeSpecName "kube-api-access-m5nd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.557435 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2cfadf95-ec64-4d3a-856f-22c0a4b65d54" (UID: "2cfadf95-ec64-4d3a-856f-22c0a4b65d54"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.572344 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-config-data" (OuterVolumeSpecName: "config-data") pod "2cfadf95-ec64-4d3a-856f-22c0a4b65d54" (UID: "2cfadf95-ec64-4d3a-856f-22c0a4b65d54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.591584 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cfadf95-ec64-4d3a-856f-22c0a4b65d54" (UID: "2cfadf95-ec64-4d3a-856f-22c0a4b65d54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.594940 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.594972 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.594981 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5nd5\" (UniqueName: \"kubernetes.io/projected/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-kube-api-access-m5nd5\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.594992 4886 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-credential-keys\") on node \"crc\" 
DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.595000 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.595008 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfadf95-ec64-4d3a-856f-22c0a4b65d54-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.779759 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.798177 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-nb\") pod \"9ccc4c92-a3b0-47ea-9620-830916d087ab\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.798224 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-svc\") pod \"9ccc4c92-a3b0-47ea-9620-830916d087ab\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.798338 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-config\") pod \"9ccc4c92-a3b0-47ea-9620-830916d087ab\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.798363 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-swift-storage-0\") pod \"9ccc4c92-a3b0-47ea-9620-830916d087ab\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.798398 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46rl\" (UniqueName: \"kubernetes.io/projected/9ccc4c92-a3b0-47ea-9620-830916d087ab-kube-api-access-j46rl\") pod \"9ccc4c92-a3b0-47ea-9620-830916d087ab\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.798614 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-sb\") pod \"9ccc4c92-a3b0-47ea-9620-830916d087ab\" (UID: \"9ccc4c92-a3b0-47ea-9620-830916d087ab\") " Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.802719 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccc4c92-a3b0-47ea-9620-830916d087ab-kube-api-access-j46rl" (OuterVolumeSpecName: "kube-api-access-j46rl") pod "9ccc4c92-a3b0-47ea-9620-830916d087ab" (UID: "9ccc4c92-a3b0-47ea-9620-830916d087ab"). InnerVolumeSpecName "kube-api-access-j46rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.865499 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ccc4c92-a3b0-47ea-9620-830916d087ab" (UID: "9ccc4c92-a3b0-47ea-9620-830916d087ab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.867783 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ccc4c92-a3b0-47ea-9620-830916d087ab" (UID: "9ccc4c92-a3b0-47ea-9620-830916d087ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.872158 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-config" (OuterVolumeSpecName: "config") pod "9ccc4c92-a3b0-47ea-9620-830916d087ab" (UID: "9ccc4c92-a3b0-47ea-9620-830916d087ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.901860 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ccc4c92-a3b0-47ea-9620-830916d087ab" (UID: "9ccc4c92-a3b0-47ea-9620-830916d087ab"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.904743 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.905044 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.905118 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.905174 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46rl\" (UniqueName: \"kubernetes.io/projected/9ccc4c92-a3b0-47ea-9620-830916d087ab-kube-api-access-j46rl\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.905337 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:58 crc kubenswrapper[4886]: I0219 21:20:58.904919 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ccc4c92-a3b0-47ea-9620-830916d087ab" (UID: "9ccc4c92-a3b0-47ea-9620-830916d087ab"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.009931 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ccc4c92-a3b0-47ea-9620-830916d087ab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.473656 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" event={"ID":"9ccc4c92-a3b0-47ea-9620-830916d087ab","Type":"ContainerDied","Data":"7e89a158cbc86696cf52aa20cde2ed5a1e6bfbd721d45c4c6ad555404e858a10"} Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.473672 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8fg8c" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.473719 4886 scope.go:117] "RemoveContainer" containerID="73b93dbc355b54e7626fcb45559e72b99a024f171194d737f734b86fd4590916" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.473931 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.475188 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kts74" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.585021 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kts74"] Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.597354 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kts74"] Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.683097 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5db54896d6-4qqcq"] Feb 19 21:20:59 crc kubenswrapper[4886]: E0219 21:20:59.684030 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfadf95-ec64-4d3a-856f-22c0a4b65d54" containerName="keystone-bootstrap" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.684048 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfadf95-ec64-4d3a-856f-22c0a4b65d54" containerName="keystone-bootstrap" Feb 19 21:20:59 crc kubenswrapper[4886]: E0219 21:20:59.684071 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="dnsmasq-dns" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.684079 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="dnsmasq-dns" Feb 19 21:20:59 crc kubenswrapper[4886]: E0219 21:20:59.684128 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="init" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.684136 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="init" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.684387 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfadf95-ec64-4d3a-856f-22c0a4b65d54" containerName="keystone-bootstrap" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.684414 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" containerName="dnsmasq-dns" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.685523 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.688177 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.688416 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.688650 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.688807 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gw5vn" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.688846 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.688926 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.710002 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5db54896d6-4qqcq"] Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844388 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-credential-keys\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844481 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgcg\" (UniqueName: \"kubernetes.io/projected/4ad4df85-25c1-471c-8122-ef562a995a6a-kube-api-access-qtgcg\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844539 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-internal-tls-certs\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-config-data\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844646 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-fernet-keys\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844673 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-public-tls-certs\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844754 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-combined-ca-bundle\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.844853 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-scripts\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-scripts\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947381 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-credential-keys\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947421 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgcg\" (UniqueName: \"kubernetes.io/projected/4ad4df85-25c1-471c-8122-ef562a995a6a-kube-api-access-qtgcg\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947471 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-internal-tls-certs\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-config-data\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947568 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-fernet-keys\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-public-tls-certs\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.947651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-combined-ca-bundle\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.952989 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-combined-ca-bundle\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.954241 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-internal-tls-certs\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.954747 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-fernet-keys\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.955233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-scripts\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.961659 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-credential-keys\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.967198 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-config-data\") pod \"keystone-5db54896d6-4qqcq\" (UID: 
\"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.967798 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4df85-25c1-471c-8122-ef562a995a6a-public-tls-certs\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:20:59 crc kubenswrapper[4886]: I0219 21:20:59.972554 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgcg\" (UniqueName: \"kubernetes.io/projected/4ad4df85-25c1-471c-8122-ef562a995a6a-kube-api-access-qtgcg\") pod \"keystone-5db54896d6-4qqcq\" (UID: \"4ad4df85-25c1-471c-8122-ef562a995a6a\") " pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:21:00 crc kubenswrapper[4886]: I0219 21:21:00.018738 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:21:00 crc kubenswrapper[4886]: I0219 21:21:00.491753 4886 generic.go:334] "Generic (PLEG): container finished" podID="2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" containerID="dac5b9d178df92c1d37ce46f555f44e5975dba4d9bfdfb41e8612fece6ecfe80" exitCode=0 Feb 19 21:21:00 crc kubenswrapper[4886]: I0219 21:21:00.491842 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4rq5v" event={"ID":"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7","Type":"ContainerDied","Data":"dac5b9d178df92c1d37ce46f555f44e5975dba4d9bfdfb41e8612fece6ecfe80"} Feb 19 21:21:00 crc kubenswrapper[4886]: I0219 21:21:00.619881 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccc4c92-a3b0-47ea-9620-830916d087ab" path="/var/lib/kubelet/pods/9ccc4c92-a3b0-47ea-9620-830916d087ab/volumes" Feb 19 21:21:02 crc kubenswrapper[4886]: I0219 21:21:02.114760 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Feb 19 21:21:02 crc kubenswrapper[4886]: I0219 21:21:02.114881 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:21:02 crc kubenswrapper[4886]: I0219 21:21:02.155423 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 21:21:02 crc kubenswrapper[4886]: I0219 21:21:02.156015 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:21:02 crc kubenswrapper[4886]: I0219 21:21:02.158224 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 21:21:02 crc kubenswrapper[4886]: I0219 21:21:02.396613 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.559282 4886 scope.go:117] "RemoveContainer" containerID="70e67ead190e42e0102382fc128f4301f3c7c581f909c95b2138b1490779e552" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.566340 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4rq5v" event={"ID":"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7","Type":"ContainerDied","Data":"bf59df88c26bb8617996c22bfccd1d2dd6b6bc98d13f5b1882248dfbd29860e1"} Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.566378 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf59df88c26bb8617996c22bfccd1d2dd6b6bc98d13f5b1882248dfbd29860e1" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.660504 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4rq5v" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.809840 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-combined-ca-bundle\") pod \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.810944 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-config-data\") pod \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.811170 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jwvl\" (UniqueName: \"kubernetes.io/projected/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-kube-api-access-5jwvl\") pod \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.811502 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-scripts\") pod \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.811537 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-logs\") pod \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\" (UID: \"2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7\") " Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.812156 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-logs" (OuterVolumeSpecName: "logs") pod "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" (UID: "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.812489 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.816370 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-kube-api-access-5jwvl" (OuterVolumeSpecName: "kube-api-access-5jwvl") pod "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" (UID: "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7"). InnerVolumeSpecName "kube-api-access-5jwvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.826055 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-scripts" (OuterVolumeSpecName: "scripts") pod "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" (UID: "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.859798 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-config-data" (OuterVolumeSpecName: "config-data") pod "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" (UID: "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.881657 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" (UID: "2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.920887 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.920922 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.920932 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jwvl\" (UniqueName: \"kubernetes.io/projected/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-kube-api-access-5jwvl\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:04 crc kubenswrapper[4886]: I0219 21:21:04.920956 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:05 crc kubenswrapper[4886]: W0219 21:21:05.401834 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ad4df85_25c1_471c_8122_ef562a995a6a.slice/crio-a232a49960bc4e788c62e9018bf6cb5da159d78458bae45743529cf976f85223 WatchSource:0}: Error finding container a232a49960bc4e788c62e9018bf6cb5da159d78458bae45743529cf976f85223: Status 404 
returned error can't find the container with id a232a49960bc4e788c62e9018bf6cb5da159d78458bae45743529cf976f85223 Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.404098 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5db54896d6-4qqcq"] Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.588096 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmr6n" event={"ID":"83d1e69b-5951-43d6-a54b-c73956bb3356","Type":"ContainerStarted","Data":"88a5a03c714ace1b80a0c8509ab70d7db48e0352d2fe33a0e4bc9c01d5881333"} Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.593087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5db54896d6-4qqcq" event={"ID":"4ad4df85-25c1-471c-8122-ef562a995a6a","Type":"ContainerStarted","Data":"a232a49960bc4e788c62e9018bf6cb5da159d78458bae45743529cf976f85223"} Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.595879 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerStarted","Data":"2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5"} Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.609076 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x27rt" event={"ID":"336b4fc8-890f-4ace-baa3-587ebc3b27db","Type":"ContainerStarted","Data":"1042a789ee012ef695b8251741b4bdb333e843ca37f2160fd731e71ae67f2ac2"} Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.609119 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4rq5v" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.618444 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xmr6n" podStartSLOduration=3.098335798 podStartE2EDuration="48.61842074s" podCreationTimestamp="2026-02-19 21:20:17 +0000 UTC" firstStartedPulling="2026-02-19 21:20:19.269497064 +0000 UTC m=+1249.897340114" lastFinishedPulling="2026-02-19 21:21:04.789582006 +0000 UTC m=+1295.417425056" observedRunningTime="2026-02-19 21:21:05.60789075 +0000 UTC m=+1296.235733800" watchObservedRunningTime="2026-02-19 21:21:05.61842074 +0000 UTC m=+1296.246263790" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.652821 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-x27rt" podStartSLOduration=3.461647268 podStartE2EDuration="49.652798569s" podCreationTimestamp="2026-02-19 21:20:16 +0000 UTC" firstStartedPulling="2026-02-19 21:20:18.601363727 +0000 UTC m=+1249.229206777" lastFinishedPulling="2026-02-19 21:21:04.792515028 +0000 UTC m=+1295.420358078" observedRunningTime="2026-02-19 21:21:05.638239959 +0000 UTC m=+1296.266083019" watchObservedRunningTime="2026-02-19 21:21:05.652798569 +0000 UTC m=+1296.280641639" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.864046 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55ffff6f4b-98277"] Feb 19 21:21:05 crc kubenswrapper[4886]: E0219 21:21:05.864786 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" containerName="placement-db-sync" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.864804 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" containerName="placement-db-sync" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.865015 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" containerName="placement-db-sync" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.866183 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.868243 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2vjq6" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.868464 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.868592 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.868707 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.869223 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.881957 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55ffff6f4b-98277"] Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948374 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-combined-ca-bundle\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948423 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cntfx\" (UniqueName: \"kubernetes.io/projected/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-kube-api-access-cntfx\") pod 
\"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-scripts\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948596 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-public-tls-certs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-internal-tls-certs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948659 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-config-data\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:05 crc kubenswrapper[4886]: I0219 21:21:05.948699 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-logs\") pod 
\"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.051501 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-scripts\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.051730 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-public-tls-certs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.051854 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-internal-tls-certs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.051911 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-config-data\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.051963 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-logs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " 
pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.052008 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-combined-ca-bundle\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.052033 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cntfx\" (UniqueName: \"kubernetes.io/projected/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-kube-api-access-cntfx\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.052349 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-logs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.063233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-scripts\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.063630 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-config-data\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.063837 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-internal-tls-certs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.064202 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-public-tls-certs\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.066131 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-combined-ca-bundle\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.073838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cntfx\" (UniqueName: \"kubernetes.io/projected/e74c4223-0372-4fe3-9ea0-1bf2cd37adc8-kube-api-access-cntfx\") pod \"placement-55ffff6f4b-98277\" (UID: \"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8\") " pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.182038 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.636633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rlz62" event={"ID":"956c70ec-60b5-4909-b686-66971581b168","Type":"ContainerStarted","Data":"af9d7a7e495ccb7c28de240a71d66718e87158bb098489e6747b5fed3c18cc1f"} Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.644044 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5db54896d6-4qqcq" event={"ID":"4ad4df85-25c1-471c-8122-ef562a995a6a","Type":"ContainerStarted","Data":"3416ab7d8d5e0ce895bdc3ac28717989840568b454d6b822327cc96b344f5b4a"} Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.644317 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.673380 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rlz62" podStartSLOduration=4.422506792 podStartE2EDuration="50.673355127s" podCreationTimestamp="2026-02-19 21:20:16 +0000 UTC" firstStartedPulling="2026-02-19 21:20:18.539601712 +0000 UTC m=+1249.167444762" lastFinishedPulling="2026-02-19 21:21:04.790450037 +0000 UTC m=+1295.418293097" observedRunningTime="2026-02-19 21:21:06.656214613 +0000 UTC m=+1297.284057663" watchObservedRunningTime="2026-02-19 21:21:06.673355127 +0000 UTC m=+1297.301198187" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.690795 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5db54896d6-4qqcq" podStartSLOduration=7.690772287 podStartE2EDuration="7.690772287s" podCreationTimestamp="2026-02-19 21:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:06.685128827 +0000 UTC m=+1297.312971877" watchObservedRunningTime="2026-02-19 
21:21:06.690772287 +0000 UTC m=+1297.318615347" Feb 19 21:21:06 crc kubenswrapper[4886]: I0219 21:21:06.771568 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55ffff6f4b-98277"] Feb 19 21:21:06 crc kubenswrapper[4886]: W0219 21:21:06.785513 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode74c4223_0372_4fe3_9ea0_1bf2cd37adc8.slice/crio-65e320479cc941885a2365df760986096e0da380dacf4d3d3bf30330415e8908 WatchSource:0}: Error finding container 65e320479cc941885a2365df760986096e0da380dacf4d3d3bf30330415e8908: Status 404 returned error can't find the container with id 65e320479cc941885a2365df760986096e0da380dacf4d3d3bf30330415e8908 Feb 19 21:21:07 crc kubenswrapper[4886]: I0219 21:21:07.658810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55ffff6f4b-98277" event={"ID":"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8","Type":"ContainerStarted","Data":"c1d43e3c6aaea5278a68c86323cde16cc934256e07b97bedad92a0e159a02968"} Feb 19 21:21:07 crc kubenswrapper[4886]: I0219 21:21:07.659292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55ffff6f4b-98277" event={"ID":"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8","Type":"ContainerStarted","Data":"ae507ec7c69b8df636105d3fb83471581800f5b808965028d40b2b513a3b246b"} Feb 19 21:21:07 crc kubenswrapper[4886]: I0219 21:21:07.659309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55ffff6f4b-98277" event={"ID":"e74c4223-0372-4fe3-9ea0-1bf2cd37adc8","Type":"ContainerStarted","Data":"65e320479cc941885a2365df760986096e0da380dacf4d3d3bf30330415e8908"} Feb 19 21:21:07 crc kubenswrapper[4886]: I0219 21:21:07.659513 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:07 crc kubenswrapper[4886]: I0219 21:21:07.696925 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-55ffff6f4b-98277" podStartSLOduration=2.696907718 podStartE2EDuration="2.696907718s" podCreationTimestamp="2026-02-19 21:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:07.681932678 +0000 UTC m=+1298.309775738" watchObservedRunningTime="2026-02-19 21:21:07.696907718 +0000 UTC m=+1298.324750768" Feb 19 21:21:08 crc kubenswrapper[4886]: I0219 21:21:08.673089 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:09 crc kubenswrapper[4886]: I0219 21:21:09.689162 4886 generic.go:334] "Generic (PLEG): container finished" podID="83d1e69b-5951-43d6-a54b-c73956bb3356" containerID="88a5a03c714ace1b80a0c8509ab70d7db48e0352d2fe33a0e4bc9c01d5881333" exitCode=0 Feb 19 21:21:09 crc kubenswrapper[4886]: I0219 21:21:09.689271 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmr6n" event={"ID":"83d1e69b-5951-43d6-a54b-c73956bb3356","Type":"ContainerDied","Data":"88a5a03c714ace1b80a0c8509ab70d7db48e0352d2fe33a0e4bc9c01d5881333"} Feb 19 21:21:11 crc kubenswrapper[4886]: I0219 21:21:11.713279 4886 generic.go:334] "Generic (PLEG): container finished" podID="336b4fc8-890f-4ace-baa3-587ebc3b27db" containerID="1042a789ee012ef695b8251741b4bdb333e843ca37f2160fd731e71ae67f2ac2" exitCode=0 Feb 19 21:21:11 crc kubenswrapper[4886]: I0219 21:21:11.713541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x27rt" event={"ID":"336b4fc8-890f-4ace-baa3-587ebc3b27db","Type":"ContainerDied","Data":"1042a789ee012ef695b8251741b4bdb333e843ca37f2160fd731e71ae67f2ac2"} Feb 19 21:21:12 crc kubenswrapper[4886]: I0219 21:21:12.725370 4886 generic.go:334] "Generic (PLEG): container finished" podID="956c70ec-60b5-4909-b686-66971581b168" containerID="af9d7a7e495ccb7c28de240a71d66718e87158bb098489e6747b5fed3c18cc1f" exitCode=0 Feb 19 
21:21:12 crc kubenswrapper[4886]: I0219 21:21:12.725439 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rlz62" event={"ID":"956c70ec-60b5-4909-b686-66971581b168","Type":"ContainerDied","Data":"af9d7a7e495ccb7c28de240a71d66718e87158bb098489e6747b5fed3c18cc1f"} Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.017607 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.134472 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgcch\" (UniqueName: \"kubernetes.io/projected/83d1e69b-5951-43d6-a54b-c73956bb3356-kube-api-access-kgcch\") pod \"83d1e69b-5951-43d6-a54b-c73956bb3356\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.134813 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-combined-ca-bundle\") pod \"83d1e69b-5951-43d6-a54b-c73956bb3356\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.134867 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-db-sync-config-data\") pod \"83d1e69b-5951-43d6-a54b-c73956bb3356\" (UID: \"83d1e69b-5951-43d6-a54b-c73956bb3356\") " Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.140764 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "83d1e69b-5951-43d6-a54b-c73956bb3356" (UID: "83d1e69b-5951-43d6-a54b-c73956bb3356"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.143412 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d1e69b-5951-43d6-a54b-c73956bb3356-kube-api-access-kgcch" (OuterVolumeSpecName: "kube-api-access-kgcch") pod "83d1e69b-5951-43d6-a54b-c73956bb3356" (UID: "83d1e69b-5951-43d6-a54b-c73956bb3356"). InnerVolumeSpecName "kube-api-access-kgcch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.167556 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83d1e69b-5951-43d6-a54b-c73956bb3356" (UID: "83d1e69b-5951-43d6-a54b-c73956bb3356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.237393 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.237442 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83d1e69b-5951-43d6-a54b-c73956bb3356-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.237462 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgcch\" (UniqueName: \"kubernetes.io/projected/83d1e69b-5951-43d6-a54b-c73956bb3356-kube-api-access-kgcch\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.772048 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmr6n" 
event={"ID":"83d1e69b-5951-43d6-a54b-c73956bb3356","Type":"ContainerDied","Data":"c396d20370fef6941299ce014881453eb1442244ff203ec95ed6751621e3e460"} Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.772398 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c396d20370fef6941299ce014881453eb1442244ff203ec95ed6751621e3e460" Feb 19 21:21:13 crc kubenswrapper[4886]: I0219 21:21:13.772102 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xmr6n" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.369539 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-mdm8x"] Feb 19 21:21:14 crc kubenswrapper[4886]: E0219 21:21:14.370024 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1e69b-5951-43d6-a54b-c73956bb3356" containerName="barbican-db-sync" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.370040 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1e69b-5951-43d6-a54b-c73956bb3356" containerName="barbican-db-sync" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.370251 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d1e69b-5951-43d6-a54b-c73956bb3356" containerName="barbican-db-sync" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.371461 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.380484 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-svc\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.380513 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.380559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsrc\" (UniqueName: \"kubernetes.io/projected/286b0cdb-3a0d-4de1-a288-ccf9494130ef-kube-api-access-4lsrc\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.380612 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-config\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.380654 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" 
(UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.380803 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.402837 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-mdm8x"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.424478 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.426553 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.438736 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.438987 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bdpd4" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.442407 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.452570 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7bdf9bd989-tvlm8"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.454802 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.469378 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.469542 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.482242 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bdf9bd989-tvlm8"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500159 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-config-data\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-combined-ca-bundle\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500428 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-config-data\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500451 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3273c1f5-d854-4ec8-acbd-76a580892999-logs\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500649 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-config-data-custom\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500694 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500796 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8jt\" (UniqueName: \"kubernetes.io/projected/3273c1f5-d854-4ec8-acbd-76a580892999-kube-api-access-9g8jt\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500847 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa52fc6-a58f-4b63-bb6d-eed86607b807-logs\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500874 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-svc\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500893 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500921 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-config-data-custom\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc 
kubenswrapper[4886]: I0219 21:21:14.500941 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm28p\" (UniqueName: \"kubernetes.io/projected/3aa52fc6-a58f-4b63-bb6d-eed86607b807-kube-api-access-nm28p\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.500989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsrc\" (UniqueName: \"kubernetes.io/projected/286b0cdb-3a0d-4de1-a288-ccf9494130ef-kube-api-access-4lsrc\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.501095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-config\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.501563 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.507309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-config\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: 
I0219 21:21:14.508083 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.508156 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-svc\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.508966 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.564858 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsrc\" (UniqueName: \"kubernetes.io/projected/286b0cdb-3a0d-4de1-a288-ccf9494130ef-kube-api-access-4lsrc\") pod \"dnsmasq-dns-85ff748b95-mdm8x\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.602943 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-config-data-custom\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.602979 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm28p\" (UniqueName: \"kubernetes.io/projected/3aa52fc6-a58f-4b63-bb6d-eed86607b807-kube-api-access-nm28p\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603053 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-config-data\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603074 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603102 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-combined-ca-bundle\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603164 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-config-data\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 
21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3273c1f5-d854-4ec8-acbd-76a580892999-logs\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603209 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-config-data-custom\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603262 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g8jt\" (UniqueName: \"kubernetes.io/projected/3273c1f5-d854-4ec8-acbd-76a580892999-kube-api-access-9g8jt\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603297 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa52fc6-a58f-4b63-bb6d-eed86607b807-logs\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.603752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa52fc6-a58f-4b63-bb6d-eed86607b807-logs\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc 
kubenswrapper[4886]: I0219 21:21:14.604809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3273c1f5-d854-4ec8-acbd-76a580892999-logs\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.611842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-combined-ca-bundle\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.612544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-config-data-custom\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.617242 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.627133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm28p\" (UniqueName: \"kubernetes.io/projected/3aa52fc6-a58f-4b63-bb6d-eed86607b807-kube-api-access-nm28p\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc 
kubenswrapper[4886]: I0219 21:21:14.629574 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-config-data-custom\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.635924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa52fc6-a58f-4b63-bb6d-eed86607b807-config-data\") pod \"barbican-keystone-listener-7dd5c64c6d-ggxlj\" (UID: \"3aa52fc6-a58f-4b63-bb6d-eed86607b807\") " pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.677389 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3273c1f5-d854-4ec8-acbd-76a580892999-config-data\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.683834 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g8jt\" (UniqueName: \"kubernetes.io/projected/3273c1f5-d854-4ec8-acbd-76a580892999-kube-api-access-9g8jt\") pod \"barbican-worker-7bdf9bd989-tvlm8\" (UID: \"3273c1f5-d854-4ec8-acbd-76a580892999\") " pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.705254 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.761879 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.816359 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7bdf9bd989-tvlm8" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.886101 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6787dcd584-7jqgr"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.888135 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.907652 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6787dcd584-7jqgr"] Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.911257 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.934158 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-combined-ca-bundle\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.934208 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data-custom\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.934282 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnjnq\" (UniqueName: 
\"kubernetes.io/projected/df8f4be1-2248-46ee-a97e-1121038c5fd8-kube-api-access-wnjnq\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.934383 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:14 crc kubenswrapper[4886]: I0219 21:21:14.934764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f4be1-2248-46ee-a97e-1121038c5fd8-logs\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.037484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f4be1-2248-46ee-a97e-1121038c5fd8-logs\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.037562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-combined-ca-bundle\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.037594 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data-custom\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.037649 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnjnq\" (UniqueName: \"kubernetes.io/projected/df8f4be1-2248-46ee-a97e-1121038c5fd8-kube-api-access-wnjnq\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.037689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.037989 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f4be1-2248-46ee-a97e-1121038c5fd8-logs\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.042379 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data-custom\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.043275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data\") pod 
\"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.055813 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnjnq\" (UniqueName: \"kubernetes.io/projected/df8f4be1-2248-46ee-a97e-1121038c5fd8-kube-api-access-wnjnq\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.064091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-combined-ca-bundle\") pod \"barbican-api-6787dcd584-7jqgr\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.276499 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.692995 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rlz62" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.752064 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcf8p\" (UniqueName: \"kubernetes.io/projected/956c70ec-60b5-4909-b686-66971581b168-kube-api-access-hcf8p\") pod \"956c70ec-60b5-4909-b686-66971581b168\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.752248 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-scripts\") pod \"956c70ec-60b5-4909-b686-66971581b168\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.752324 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-config-data\") pod \"956c70ec-60b5-4909-b686-66971581b168\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.752425 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/956c70ec-60b5-4909-b686-66971581b168-etc-machine-id\") pod \"956c70ec-60b5-4909-b686-66971581b168\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.752484 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-combined-ca-bundle\") pod \"956c70ec-60b5-4909-b686-66971581b168\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.752805 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-db-sync-config-data\") pod \"956c70ec-60b5-4909-b686-66971581b168\" (UID: \"956c70ec-60b5-4909-b686-66971581b168\") " Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.755655 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/956c70ec-60b5-4909-b686-66971581b168-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "956c70ec-60b5-4909-b686-66971581b168" (UID: "956c70ec-60b5-4909-b686-66971581b168"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.761189 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-scripts" (OuterVolumeSpecName: "scripts") pod "956c70ec-60b5-4909-b686-66971581b168" (UID: "956c70ec-60b5-4909-b686-66971581b168"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.766759 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "956c70ec-60b5-4909-b686-66971581b168" (UID: "956c70ec-60b5-4909-b686-66971581b168"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.767126 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956c70ec-60b5-4909-b686-66971581b168-kube-api-access-hcf8p" (OuterVolumeSpecName: "kube-api-access-hcf8p") pod "956c70ec-60b5-4909-b686-66971581b168" (UID: "956c70ec-60b5-4909-b686-66971581b168"). InnerVolumeSpecName "kube-api-access-hcf8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.795291 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "956c70ec-60b5-4909-b686-66971581b168" (UID: "956c70ec-60b5-4909-b686-66971581b168"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.797571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rlz62" event={"ID":"956c70ec-60b5-4909-b686-66971581b168","Type":"ContainerDied","Data":"7544c266f1e7db7ad0aee365345495aba93e569fa49611ea88e9f823b2d49b27"} Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.797610 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7544c266f1e7db7ad0aee365345495aba93e569fa49611ea88e9f823b2d49b27" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.797662 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rlz62" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.842663 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-config-data" (OuterVolumeSpecName: "config-data") pod "956c70ec-60b5-4909-b686-66971581b168" (UID: "956c70ec-60b5-4909-b686-66971581b168"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.857738 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.857771 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.857781 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/956c70ec-60b5-4909-b686-66971581b168-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.857791 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.857799 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/956c70ec-60b5-4909-b686-66971581b168-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:15 crc kubenswrapper[4886]: I0219 21:21:15.857810 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcf8p\" (UniqueName: \"kubernetes.io/projected/956c70ec-60b5-4909-b686-66971581b168-kube-api-access-hcf8p\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.014333 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:17 crc kubenswrapper[4886]: E0219 21:21:17.015140 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956c70ec-60b5-4909-b686-66971581b168" 
containerName="cinder-db-sync" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.015153 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="956c70ec-60b5-4909-b686-66971581b168" containerName="cinder-db-sync" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.015369 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="956c70ec-60b5-4909-b686-66971581b168" containerName="cinder-db-sync" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.016471 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.020823 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.021108 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.021296 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.021435 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rsk69" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.040125 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.083950 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.084032 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.084109 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.084160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.084194 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.084224 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9jb\" (UniqueName: \"kubernetes.io/projected/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-kube-api-access-zw9jb\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.114008 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-mdm8x"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.149167 4886 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkgk6"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.151004 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.164066 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkgk6"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.185560 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.185826 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.185937 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-config\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.186032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 
21:21:17.186134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snj5\" (UniqueName: \"kubernetes.io/projected/40edad44-68bc-4fbc-9ceb-881436065e53-kube-api-access-2snj5\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.186253 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.186353 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.186453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.186544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 
21:21:17.186626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw9jb\" (UniqueName: \"kubernetes.io/projected/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-kube-api-access-zw9jb\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.190492 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.190653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.192283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.200065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.200431 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.201089 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.202400 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.234617 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw9jb\" (UniqueName: \"kubernetes.io/projected/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-kube-api-access-zw9jb\") pod \"cinder-scheduler-0\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.296006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.296093 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-config\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.296184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snj5\" (UniqueName: \"kubernetes.io/projected/40edad44-68bc-4fbc-9ceb-881436065e53-kube-api-access-2snj5\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.296214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.296245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.296356 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.297038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 
21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.297088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.297889 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-config\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.298406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.300208 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.351945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snj5\" (UniqueName: \"kubernetes.io/projected/40edad44-68bc-4fbc-9ceb-881436065e53-kube-api-access-2snj5\") pod \"dnsmasq-dns-5c9776ccc5-wkgk6\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.361141 4886 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.390067 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.476105 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.478227 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.483573 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.489585 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603108 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603161 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-logs\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 
21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603208 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603272 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkf4\" (UniqueName: \"kubernetes.io/projected/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-kube-api-access-mmkf4\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-scripts\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.603355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.621014 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.700608 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-597bbcffd5-5t75q"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.704919 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-597bbcffd5-5t75q" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-api" containerID="cri-o://2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f" gracePeriod=30 Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.704953 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.705073 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-597bbcffd5-5t75q" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-httpd" containerID="cri-o://717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5" gracePeriod=30 Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.705723 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.705842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " 
pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.706481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-logs\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.709168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.709203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.709370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkf4\" (UniqueName: \"kubernetes.io/projected/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-kube-api-access-mmkf4\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.709488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-scripts\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.710828 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.707590 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-logs\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.718320 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-scripts\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.719115 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.726622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.730144 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-597bbcffd5-5t75q" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": EOF" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.737764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mmkf4\" (UniqueName: \"kubernetes.io/projected/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-kube-api-access-mmkf4\") pod \"cinder-api-0\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.745429 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86c8d64dfc-bltj4"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.747558 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.758966 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86c8d64dfc-bltj4"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.797457 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.814339 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-combined-ca-bundle\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.814392 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-ovndb-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.814416 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzfl\" (UniqueName: \"kubernetes.io/projected/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-kube-api-access-kkzfl\") pod 
\"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.814479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-public-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.814630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-config\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.814677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-internal-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.815017 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-httpd-config\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.892579 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6669c66696-lw9qb"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.894234 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.896387 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.898994 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.912141 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6669c66696-lw9qb"] Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-config\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917333 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-internal-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917444 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-httpd-config\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-combined-ca-bundle\") pod \"neutron-86c8d64dfc-bltj4\" (UID: 
\"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-ovndb-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917528 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzfl\" (UniqueName: \"kubernetes.io/projected/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-kube-api-access-kkzfl\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.917582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-public-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.920654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-public-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.923920 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-internal-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 
19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.924277 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-config\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.932980 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-httpd-config\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.933286 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-combined-ca-bundle\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.933703 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-ovndb-tls-certs\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.943738 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzfl\" (UniqueName: \"kubernetes.io/projected/2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d-kube-api-access-kkzfl\") pod \"neutron-86c8d64dfc-bltj4\" (UID: \"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d\") " pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:17 crc kubenswrapper[4886]: I0219 21:21:17.999494 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-x27rt" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019038 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-public-tls-certs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019095 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-config-data-custom\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019138 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7rc\" (UniqueName: \"kubernetes.io/projected/44a3c2e5-e71b-427d-bf67-f6f131e459e3-kube-api-access-lq7rc\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019159 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-config-data\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019272 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-combined-ca-bundle\") pod 
\"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019428 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a3c2e5-e71b-427d-bf67-f6f131e459e3-logs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.019540 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-internal-tls-certs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.121507 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-combined-ca-bundle\") pod \"336b4fc8-890f-4ace-baa3-587ebc3b27db\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.121850 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-config-data\") pod \"336b4fc8-890f-4ace-baa3-587ebc3b27db\" (UID: \"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.121946 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lgkd\" (UniqueName: \"kubernetes.io/projected/336b4fc8-890f-4ace-baa3-587ebc3b27db-kube-api-access-7lgkd\") pod \"336b4fc8-890f-4ace-baa3-587ebc3b27db\" (UID: 
\"336b4fc8-890f-4ace-baa3-587ebc3b27db\") " Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122215 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-combined-ca-bundle\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122282 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a3c2e5-e71b-427d-bf67-f6f131e459e3-logs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-internal-tls-certs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122374 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-public-tls-certs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122399 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-config-data-custom\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 
19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122435 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7rc\" (UniqueName: \"kubernetes.io/projected/44a3c2e5-e71b-427d-bf67-f6f131e459e3-kube-api-access-lq7rc\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.122454 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-config-data\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.125220 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a3c2e5-e71b-427d-bf67-f6f131e459e3-logs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.129924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-config-data\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.134081 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.134293 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-internal-tls-certs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.137946 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336b4fc8-890f-4ace-baa3-587ebc3b27db-kube-api-access-7lgkd" (OuterVolumeSpecName: "kube-api-access-7lgkd") pod "336b4fc8-890f-4ace-baa3-587ebc3b27db" (UID: "336b4fc8-890f-4ace-baa3-587ebc3b27db"). InnerVolumeSpecName "kube-api-access-7lgkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.142049 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-combined-ca-bundle\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.142597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-public-tls-certs\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.152115 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44a3c2e5-e71b-427d-bf67-f6f131e459e3-config-data-custom\") pod \"barbican-api-6669c66696-lw9qb\" (UID: 
\"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.154513 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7rc\" (UniqueName: \"kubernetes.io/projected/44a3c2e5-e71b-427d-bf67-f6f131e459e3-kube-api-access-lq7rc\") pod \"barbican-api-6669c66696-lw9qb\" (UID: \"44a3c2e5-e71b-427d-bf67-f6f131e459e3\") " pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.209820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.226422 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lgkd\" (UniqueName: \"kubernetes.io/projected/336b4fc8-890f-4ace-baa3-587ebc3b27db-kube-api-access-7lgkd\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.259723 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "336b4fc8-890f-4ace-baa3-587ebc3b27db" (UID: "336b4fc8-890f-4ace-baa3-587ebc3b27db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.271567 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-config-data" (OuterVolumeSpecName: "config-data") pod "336b4fc8-890f-4ace-baa3-587ebc3b27db" (UID: "336b4fc8-890f-4ace-baa3-587ebc3b27db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.324755 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.324826 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.330249 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.330342 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336b4fc8-890f-4ace-baa3-587ebc3b27db-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.871095 4886 generic.go:334] "Generic (PLEG): container finished" podID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerID="717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5" exitCode=0 Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.874497 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597bbcffd5-5t75q" event={"ID":"80cfc748-e684-4b36-8c1c-a22a1ebaf84c","Type":"ContainerDied","Data":"717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5"} Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.891524 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-db-sync-x27rt" event={"ID":"336b4fc8-890f-4ace-baa3-587ebc3b27db","Type":"ContainerDied","Data":"ae2357f768be389806e85e34407260671d23c579a4d8d370340e40fe7b6d998d"} Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.891567 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2357f768be389806e85e34407260671d23c579a4d8d370340e40fe7b6d998d" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.891632 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-x27rt" Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.947322 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkgk6"] Feb 19 21:21:18 crc kubenswrapper[4886]: I0219 21:21:18.978362 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.012081 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6787dcd584-7jqgr"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.138583 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bdf9bd989-tvlm8"] Feb 19 21:21:19 crc kubenswrapper[4886]: W0219 21:21:19.209459 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3273c1f5_d854_4ec8_acbd_76a580892999.slice/crio-2220bd8dfe1909f988b7e563628a12981c8df365d42dfcd8d1e2c0049d02eb66 WatchSource:0}: Error finding container 2220bd8dfe1909f988b7e563628a12981c8df365d42dfcd8d1e2c0049d02eb66: Status 404 returned error can't find the container with id 2220bd8dfe1909f988b7e563628a12981c8df365d42dfcd8d1e2c0049d02eb66 Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.583714 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.599230 
4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.624571 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6669c66696-lw9qb"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.675162 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-mdm8x"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.686852 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86c8d64dfc-bltj4"] Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.761934 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-597bbcffd5-5t75q" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": dial tcp 10.217.0.194:9696: connect: connection refused" Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.902513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" event={"ID":"40edad44-68bc-4fbc-9ceb-881436065e53","Type":"ContainerStarted","Data":"05c72c6af8cbe0432e83f5b69fe072ff68c2eef963b711a816723a571d1d3343"} Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.902841 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" event={"ID":"40edad44-68bc-4fbc-9ceb-881436065e53","Type":"ContainerStarted","Data":"867dfc1a79d4e129a658a36a6fa6a4027185e6370f328279b87f417a636058cf"} Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.903639 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6787dcd584-7jqgr" event={"ID":"df8f4be1-2248-46ee-a97e-1121038c5fd8","Type":"ContainerStarted","Data":"fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15"} Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.903680 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6787dcd584-7jqgr" event={"ID":"df8f4be1-2248-46ee-a97e-1121038c5fd8","Type":"ContainerStarted","Data":"f01d25a265bb33cb5ea41929056791a921c9060fd81731177b07667844aab5a9"} Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.904528 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1fe0d2-edc0-494f-a9a7-cf568a97929f","Type":"ContainerStarted","Data":"5cc87649c8aa5d3364865a88ee28400172ac611252af9a4e1780ad469dec86c6"} Feb 19 21:21:19 crc kubenswrapper[4886]: I0219 21:21:19.905467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bdf9bd989-tvlm8" event={"ID":"3273c1f5-d854-4ec8-acbd-76a580892999","Type":"ContainerStarted","Data":"2220bd8dfe1909f988b7e563628a12981c8df365d42dfcd8d1e2c0049d02eb66"} Feb 19 21:21:20 crc kubenswrapper[4886]: I0219 21:21:20.916445 4886 generic.go:334] "Generic (PLEG): container finished" podID="40edad44-68bc-4fbc-9ceb-881436065e53" containerID="05c72c6af8cbe0432e83f5b69fe072ff68c2eef963b711a816723a571d1d3343" exitCode=0 Feb 19 21:21:20 crc kubenswrapper[4886]: I0219 21:21:20.916485 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" event={"ID":"40edad44-68bc-4fbc-9ceb-881436065e53","Type":"ContainerDied","Data":"05c72c6af8cbe0432e83f5b69fe072ff68c2eef963b711a816723a571d1d3343"} Feb 19 21:21:21 crc kubenswrapper[4886]: I0219 21:21:21.267959 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:21:22 crc kubenswrapper[4886]: W0219 21:21:22.307957 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd08739_53bf_4b73_a5c9_8d51d2a7f6c1.slice/crio-981e9bc4aeaa342da12104e807c3c68f1882862555566b26498982f0fe0e2ab2 WatchSource:0}: Error finding container 981e9bc4aeaa342da12104e807c3c68f1882862555566b26498982f0fe0e2ab2: Status 404 
returned error can't find the container with id 981e9bc4aeaa342da12104e807c3c68f1882862555566b26498982f0fe0e2ab2 Feb 19 21:21:22 crc kubenswrapper[4886]: W0219 21:21:22.309780 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa52fc6_a58f_4b63_bb6d_eed86607b807.slice/crio-a9f8aead3e70d8ba6c4f312a5268f4a0f99a931d58ce092d6ef7a126926b5a89 WatchSource:0}: Error finding container a9f8aead3e70d8ba6c4f312a5268f4a0f99a931d58ce092d6ef7a126926b5a89: Status 404 returned error can't find the container with id a9f8aead3e70d8ba6c4f312a5268f4a0f99a931d58ce092d6ef7a126926b5a89 Feb 19 21:21:22 crc kubenswrapper[4886]: W0219 21:21:22.320807 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d0b1fe4_d71b_4b54_bdd5_dee64b03ff2d.slice/crio-3fd21b8d8648bd18fa6861ab755a1d0a05328fc7dac767004b2034e5e4ab0e9b WatchSource:0}: Error finding container 3fd21b8d8648bd18fa6861ab755a1d0a05328fc7dac767004b2034e5e4ab0e9b: Status 404 returned error can't find the container with id 3fd21b8d8648bd18fa6861ab755a1d0a05328fc7dac767004b2034e5e4ab0e9b Feb 19 21:21:22 crc kubenswrapper[4886]: I0219 21:21:22.949504 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8d64dfc-bltj4" event={"ID":"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d","Type":"ContainerStarted","Data":"3fd21b8d8648bd18fa6861ab755a1d0a05328fc7dac767004b2034e5e4ab0e9b"} Feb 19 21:21:22 crc kubenswrapper[4886]: I0219 21:21:22.950781 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" event={"ID":"3aa52fc6-a58f-4b63-bb6d-eed86607b807","Type":"ContainerStarted","Data":"a9f8aead3e70d8ba6c4f312a5268f4a0f99a931d58ce092d6ef7a126926b5a89"} Feb 19 21:21:22 crc kubenswrapper[4886]: I0219 21:21:22.952481 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1","Type":"ContainerStarted","Data":"981e9bc4aeaa342da12104e807c3c68f1882862555566b26498982f0fe0e2ab2"} Feb 19 21:21:22 crc kubenswrapper[4886]: I0219 21:21:22.954965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6669c66696-lw9qb" event={"ID":"44a3c2e5-e71b-427d-bf67-f6f131e459e3","Type":"ContainerStarted","Data":"a18fdefff6f466b81534ac9446401220f65b90eba36a6640de5d0bc052d8f931"} Feb 19 21:21:22 crc kubenswrapper[4886]: I0219 21:21:22.955009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6669c66696-lw9qb" event={"ID":"44a3c2e5-e71b-427d-bf67-f6f131e459e3","Type":"ContainerStarted","Data":"1bb3d9e86dab44b9e25955c33ef84a49857a9204b53e2ae1df5c888abd9222e8"} Feb 19 21:21:22 crc kubenswrapper[4886]: I0219 21:21:22.957378 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" event={"ID":"286b0cdb-3a0d-4de1-a288-ccf9494130ef","Type":"ContainerStarted","Data":"afdb586f14aacd38132fbe709431af1ec9794c0ab2556bb39d90f316838ebcc2"} Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.968847 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6787dcd584-7jqgr" event={"ID":"df8f4be1-2248-46ee-a97e-1121038c5fd8","Type":"ContainerStarted","Data":"4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52"} Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.969067 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.969252 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.970577 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"ff1fe0d2-edc0-494f-a9a7-cf568a97929f","Type":"ContainerStarted","Data":"3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031"} Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.979459 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8d64dfc-bltj4" event={"ID":"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d","Type":"ContainerStarted","Data":"c5bfc6936a06ec2ba9aa81d9f8aef8797d055a6b30eb50dfe1c8e98ab69f2b7e"} Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.981959 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1","Type":"ContainerStarted","Data":"edcf4d93db363fcecb58646a89eb8b9330d43628486ebc3c4241236bee79acf6"} Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.985099 4886 generic.go:334] "Generic (PLEG): container finished" podID="286b0cdb-3a0d-4de1-a288-ccf9494130ef" containerID="334c67829564c63bbfa71331efb5691d40d7f48864d3b86e12968b85f6dc6a0d" exitCode=0 Feb 19 21:21:23 crc kubenswrapper[4886]: I0219 21:21:23.985144 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" event={"ID":"286b0cdb-3a0d-4de1-a288-ccf9494130ef","Type":"ContainerDied","Data":"334c67829564c63bbfa71331efb5691d40d7f48864d3b86e12968b85f6dc6a0d"} Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerStarted","Data":"b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae"} Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008374 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6787dcd584-7jqgr" podStartSLOduration=10.008354648 podStartE2EDuration="10.008354648s" podCreationTimestamp="2026-02-19 21:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:23.990060106 +0000 UTC m=+1314.617903156" watchObservedRunningTime="2026-02-19 21:21:24.008354648 +0000 UTC m=+1314.636197698" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008460 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008527 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="proxy-httpd" containerID="cri-o://b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae" gracePeriod=30 Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008646 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="sg-core" containerID="cri-o://2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5" gracePeriod=30 Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008719 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-notification-agent" containerID="cri-o://76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c" gracePeriod=30 Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.008406 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-central-agent" containerID="cri-o://f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204" gracePeriod=30 Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.030044 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" 
event={"ID":"40edad44-68bc-4fbc-9ceb-881436065e53","Type":"ContainerStarted","Data":"649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add"} Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.030477 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.062902 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.49208602 podStartE2EDuration="1m7.062880214s" podCreationTimestamp="2026-02-19 21:20:17 +0000 UTC" firstStartedPulling="2026-02-19 21:20:19.140705984 +0000 UTC m=+1249.768549034" lastFinishedPulling="2026-02-19 21:21:22.711500178 +0000 UTC m=+1313.339343228" observedRunningTime="2026-02-19 21:21:24.039168839 +0000 UTC m=+1314.667011889" watchObservedRunningTime="2026-02-19 21:21:24.062880214 +0000 UTC m=+1314.690723264" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.076903 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" podStartSLOduration=7.07688555 podStartE2EDuration="7.07688555s" podCreationTimestamp="2026-02-19 21:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:24.058924407 +0000 UTC m=+1314.686767457" watchObservedRunningTime="2026-02-19 21:21:24.07688555 +0000 UTC m=+1314.704728590" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.665611 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.816571 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-nb\") pod \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.816661 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-config\") pod \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.816705 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsrc\" (UniqueName: \"kubernetes.io/projected/286b0cdb-3a0d-4de1-a288-ccf9494130ef-kube-api-access-4lsrc\") pod \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.816764 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-sb\") pod \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.816791 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-swift-storage-0\") pod \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.816896 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-svc\") pod \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\" (UID: \"286b0cdb-3a0d-4de1-a288-ccf9494130ef\") " Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.828886 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/286b0cdb-3a0d-4de1-a288-ccf9494130ef-kube-api-access-4lsrc" (OuterVolumeSpecName: "kube-api-access-4lsrc") pod "286b0cdb-3a0d-4de1-a288-ccf9494130ef" (UID: "286b0cdb-3a0d-4de1-a288-ccf9494130ef"). InnerVolumeSpecName "kube-api-access-4lsrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.853296 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "286b0cdb-3a0d-4de1-a288-ccf9494130ef" (UID: "286b0cdb-3a0d-4de1-a288-ccf9494130ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.853845 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "286b0cdb-3a0d-4de1-a288-ccf9494130ef" (UID: "286b0cdb-3a0d-4de1-a288-ccf9494130ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.871686 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-config" (OuterVolumeSpecName: "config") pod "286b0cdb-3a0d-4de1-a288-ccf9494130ef" (UID: "286b0cdb-3a0d-4de1-a288-ccf9494130ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.879854 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "286b0cdb-3a0d-4de1-a288-ccf9494130ef" (UID: "286b0cdb-3a0d-4de1-a288-ccf9494130ef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.892834 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "286b0cdb-3a0d-4de1-a288-ccf9494130ef" (UID: "286b0cdb-3a0d-4de1-a288-ccf9494130ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.918943 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.918971 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lsrc\" (UniqueName: \"kubernetes.io/projected/286b0cdb-3a0d-4de1-a288-ccf9494130ef-kube-api-access-4lsrc\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.918982 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.918990 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.918999 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:24 crc kubenswrapper[4886]: I0219 21:21:24.919006 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/286b0cdb-3a0d-4de1-a288-ccf9494130ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.059637 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8d64dfc-bltj4" event={"ID":"2d0b1fe4-d71b-4b54-bdd5-dee64b03ff2d","Type":"ContainerStarted","Data":"3efe76293f2f1827a353f88957b908c2692a7e28834aab7023afe70f2130a284"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.060859 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.069705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" event={"ID":"3aa52fc6-a58f-4b63-bb6d-eed86607b807","Type":"ContainerStarted","Data":"2ed9d86674d544741ca886637697f91960d658cb385c6d2e424beb63fa3ec248"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.071881 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bdf9bd989-tvlm8" event={"ID":"3273c1f5-d854-4ec8-acbd-76a580892999","Type":"ContainerStarted","Data":"7e35ebef102ee0a9994853fd02178d9fa12acf9ac7237de37cd017d02db5eeda"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.080035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6669c66696-lw9qb" 
event={"ID":"44a3c2e5-e71b-427d-bf67-f6f131e459e3","Type":"ContainerStarted","Data":"1513cfd9ac99a62a61cff8f38220f8db2cc54f100a6fe36a80c279a39d7f2981"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.080640 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.080704 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.084379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" event={"ID":"286b0cdb-3a0d-4de1-a288-ccf9494130ef","Type":"ContainerDied","Data":"afdb586f14aacd38132fbe709431af1ec9794c0ab2556bb39d90f316838ebcc2"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.084406 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-mdm8x" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.084435 4886 scope.go:117] "RemoveContainer" containerID="334c67829564c63bbfa71331efb5691d40d7f48864d3b86e12968b85f6dc6a0d" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.097850 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerID="b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae" exitCode=0 Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.097878 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerID="2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5" exitCode=2 Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.097886 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerID="f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204" exitCode=0 Feb 19 21:21:25 crc kubenswrapper[4886]: 
I0219 21:21:25.098814 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerDied","Data":"b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.098849 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerDied","Data":"2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.098859 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerDied","Data":"f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204"} Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.112952 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86c8d64dfc-bltj4" podStartSLOduration=8.11293486 podStartE2EDuration="8.11293486s" podCreationTimestamp="2026-02-19 21:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:25.092760422 +0000 UTC m=+1315.720603472" watchObservedRunningTime="2026-02-19 21:21:25.11293486 +0000 UTC m=+1315.740777910" Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.113491 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6669c66696-lw9qb" podStartSLOduration=8.113486094 podStartE2EDuration="8.113486094s" podCreationTimestamp="2026-02-19 21:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:25.113088474 +0000 UTC m=+1315.740931524" watchObservedRunningTime="2026-02-19 21:21:25.113486094 +0000 UTC m=+1315.741329144" Feb 19 21:21:25 
crc kubenswrapper[4886]: I0219 21:21:25.253321 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-mdm8x"] Feb 19 21:21:25 crc kubenswrapper[4886]: I0219 21:21:25.276839 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-mdm8x"] Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.112401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1fe0d2-edc0-494f-a9a7-cf568a97929f","Type":"ContainerStarted","Data":"e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade"} Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.114781 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" event={"ID":"3aa52fc6-a58f-4b63-bb6d-eed86607b807","Type":"ContainerStarted","Data":"2d5417b62928767a005fad6837b05d6ba95a039894f931b90e719b638966db0a"} Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.118465 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bdf9bd989-tvlm8" event={"ID":"3273c1f5-d854-4ec8-acbd-76a580892999","Type":"ContainerStarted","Data":"bb1ebed745d54f411eaeeb8c69834eb804665bac35bea7272dd967e4ad39091a"} Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.120796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1","Type":"ContainerStarted","Data":"e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7"} Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.120948 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api-log" containerID="cri-o://edcf4d93db363fcecb58646a89eb8b9330d43628486ebc3c4241236bee79acf6" gracePeriod=30 Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.121304 4886 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.121354 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api" containerID="cri-o://e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7" gracePeriod=30 Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.149107 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.368778126 podStartE2EDuration="10.149090172s" podCreationTimestamp="2026-02-19 21:21:16 +0000 UTC" firstStartedPulling="2026-02-19 21:21:18.980215053 +0000 UTC m=+1309.608058103" lastFinishedPulling="2026-02-19 21:21:22.760527099 +0000 UTC m=+1313.388370149" observedRunningTime="2026-02-19 21:21:26.143437323 +0000 UTC m=+1316.771280413" watchObservedRunningTime="2026-02-19 21:21:26.149090172 +0000 UTC m=+1316.776933222" Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.212214 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7dd5c64c6d-ggxlj" podStartSLOduration=10.436856187 podStartE2EDuration="12.21218881s" podCreationTimestamp="2026-02-19 21:21:14 +0000 UTC" firstStartedPulling="2026-02-19 21:21:22.312136338 +0000 UTC m=+1312.939979388" lastFinishedPulling="2026-02-19 21:21:24.087468961 +0000 UTC m=+1314.715312011" observedRunningTime="2026-02-19 21:21:26.172285295 +0000 UTC m=+1316.800128345" watchObservedRunningTime="2026-02-19 21:21:26.21218881 +0000 UTC m=+1316.840031890" Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.223004 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.222985067 podStartE2EDuration="9.222985067s" podCreationTimestamp="2026-02-19 21:21:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:26.199431705 +0000 UTC m=+1316.827274755" watchObservedRunningTime="2026-02-19 21:21:26.222985067 +0000 UTC m=+1316.850828117" Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.244587 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7bdf9bd989-tvlm8" podStartSLOduration=7.453811176 podStartE2EDuration="12.24456986s" podCreationTimestamp="2026-02-19 21:21:14 +0000 UTC" firstStartedPulling="2026-02-19 21:21:19.213702848 +0000 UTC m=+1309.841545898" lastFinishedPulling="2026-02-19 21:21:24.004461532 +0000 UTC m=+1314.632304582" observedRunningTime="2026-02-19 21:21:26.222824423 +0000 UTC m=+1316.850667473" watchObservedRunningTime="2026-02-19 21:21:26.24456986 +0000 UTC m=+1316.872412900" Feb 19 21:21:26 crc kubenswrapper[4886]: I0219 21:21:26.626407 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="286b0cdb-3a0d-4de1-a288-ccf9494130ef" path="/var/lib/kubelet/pods/286b0cdb-3a0d-4de1-a288-ccf9494130ef/volumes" Feb 19 21:21:27 crc kubenswrapper[4886]: I0219 21:21:27.135962 4886 generic.go:334] "Generic (PLEG): container finished" podID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerID="edcf4d93db363fcecb58646a89eb8b9330d43628486ebc3c4241236bee79acf6" exitCode=143 Feb 19 21:21:27 crc kubenswrapper[4886]: I0219 21:21:27.137070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1","Type":"ContainerDied","Data":"edcf4d93db363fcecb58646a89eb8b9330d43628486ebc3c4241236bee79acf6"} Feb 19 21:21:27 crc kubenswrapper[4886]: I0219 21:21:27.363301 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 21:21:27 crc kubenswrapper[4886]: I0219 21:21:27.622747 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:21:27 crc kubenswrapper[4886]: I0219 21:21:27.854663 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79lgs"] Feb 19 21:21:27 crc kubenswrapper[4886]: I0219 21:21:27.855191 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerName="dnsmasq-dns" containerID="cri-o://c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8" gracePeriod=10 Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.157873 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.167344 4886 generic.go:334] "Generic (PLEG): container finished" podID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerID="2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f" exitCode=0 Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.168142 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597bbcffd5-5t75q" event={"ID":"80cfc748-e684-4b36-8c1c-a22a1ebaf84c","Type":"ContainerDied","Data":"2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f"} Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.168181 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597bbcffd5-5t75q" event={"ID":"80cfc748-e684-4b36-8c1c-a22a1ebaf84c","Type":"ContainerDied","Data":"f2eedbea633b3f110cbf3df8f887d7d0836d25ff5ddbb15ddd434a43cd2ea5a2"} Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.168198 4886 scope.go:117] "RemoveContainer" containerID="717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.243817 4886 scope.go:117] "RemoveContainer" containerID="2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f" Feb 19 21:21:28 
crc kubenswrapper[4886]: I0219 21:21:28.247055 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-combined-ca-bundle\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.247120 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-httpd-config\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.247347 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vxk\" (UniqueName: \"kubernetes.io/projected/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-kube-api-access-p8vxk\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.248133 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-ovndb-tls-certs\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.248226 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-config\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.248250 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-public-tls-certs\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.248297 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-internal-tls-certs\") pod \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\" (UID: \"80cfc748-e684-4b36-8c1c-a22a1ebaf84c\") " Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.257024 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.257185 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-kube-api-access-p8vxk" (OuterVolumeSpecName: "kube-api-access-p8vxk") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "kube-api-access-p8vxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.345458 4886 scope.go:117] "RemoveContainer" containerID="717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5" Feb 19 21:21:28 crc kubenswrapper[4886]: E0219 21:21:28.359826 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5\": container with ID starting with 717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5 not found: ID does not exist" containerID="717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.359863 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5"} err="failed to get container status \"717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5\": rpc error: code = NotFound desc = could not find container \"717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5\": container with ID starting with 717ec4a9a5e32c5a2f84635678a193f6baadcec7644464e6d2fca315a33a67c5 not found: ID does not exist" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.359886 4886 scope.go:117] "RemoveContainer" containerID="2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.361658 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.361679 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vxk\" (UniqueName: \"kubernetes.io/projected/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-kube-api-access-p8vxk\") on node \"crc\" 
DevicePath \"\"" Feb 19 21:21:28 crc kubenswrapper[4886]: E0219 21:21:28.366966 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f\": container with ID starting with 2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f not found: ID does not exist" containerID="2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.367003 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f"} err="failed to get container status \"2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f\": rpc error: code = NotFound desc = could not find container \"2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f\": container with ID starting with 2982cf6a6d9667622f1d05f11777ad97a290cdc543edecd5ccdd02b07381881f not found: ID does not exist" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.405377 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.413207 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.449617 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.455737 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.456357 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-config" (OuterVolumeSpecName: "config") pod "80cfc748-e684-4b36-8c1c-a22a1ebaf84c" (UID: "80cfc748-e684-4b36-8c1c-a22a1ebaf84c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.463228 4886 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.463284 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.463294 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.463304 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:28 crc kubenswrapper[4886]: I0219 21:21:28.463314 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80cfc748-e684-4b36-8c1c-a22a1ebaf84c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.075087 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.181177 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-sb\") pod \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.181397 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-swift-storage-0\") pod \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.181422 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-config\") pod \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.181494 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttcw2\" (UniqueName: \"kubernetes.io/projected/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-kube-api-access-ttcw2\") pod \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.182192 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-nb\") pod \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.182352 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-svc\") pod \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\" (UID: \"9596f93c-54cf-4dfd-b8f7-f6a6c19be356\") " Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.186417 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-597bbcffd5-5t75q" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.196784 4886 generic.go:334] "Generic (PLEG): container finished" podID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerID="c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8" exitCode=0 Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.196824 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" event={"ID":"9596f93c-54cf-4dfd-b8f7-f6a6c19be356","Type":"ContainerDied","Data":"c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8"} Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.196849 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" event={"ID":"9596f93c-54cf-4dfd-b8f7-f6a6c19be356","Type":"ContainerDied","Data":"e7f6321614df36014e9429b06e127219b5e16816e1f8824eb874466e1ce79cbe"} Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.196869 4886 scope.go:117] "RemoveContainer" containerID="c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.196984 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79lgs" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.209508 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-kube-api-access-ttcw2" (OuterVolumeSpecName: "kube-api-access-ttcw2") pod "9596f93c-54cf-4dfd-b8f7-f6a6c19be356" (UID: "9596f93c-54cf-4dfd-b8f7-f6a6c19be356"). 
InnerVolumeSpecName "kube-api-access-ttcw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.243935 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-597bbcffd5-5t75q"] Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.286812 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttcw2\" (UniqueName: \"kubernetes.io/projected/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-kube-api-access-ttcw2\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.291008 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.293734 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9596f93c-54cf-4dfd-b8f7-f6a6c19be356" (UID: "9596f93c-54cf-4dfd-b8f7-f6a6c19be356"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.293041 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-597bbcffd5-5t75q"] Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.303738 4886 scope.go:117] "RemoveContainer" containerID="b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.361611 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-config" (OuterVolumeSpecName: "config") pod "9596f93c-54cf-4dfd-b8f7-f6a6c19be356" (UID: "9596f93c-54cf-4dfd-b8f7-f6a6c19be356"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.384486 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9596f93c-54cf-4dfd-b8f7-f6a6c19be356" (UID: "9596f93c-54cf-4dfd-b8f7-f6a6c19be356"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.390637 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.390666 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.390675 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.395231 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9596f93c-54cf-4dfd-b8f7-f6a6c19be356" (UID: "9596f93c-54cf-4dfd-b8f7-f6a6c19be356"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.402712 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9596f93c-54cf-4dfd-b8f7-f6a6c19be356" (UID: "9596f93c-54cf-4dfd-b8f7-f6a6c19be356"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.492327 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.492627 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9596f93c-54cf-4dfd-b8f7-f6a6c19be356-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.618461 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79lgs"] Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.625453 4886 scope.go:117] "RemoveContainer" containerID="c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8" Feb 19 21:21:29 crc kubenswrapper[4886]: E0219 21:21:29.626001 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8\": container with ID starting with c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8 not found: ID does not exist" containerID="c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.626037 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8"} err="failed to get container status \"c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8\": rpc error: code = NotFound desc = could not find container \"c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8\": container with ID starting with c433d8eb4b1761d7fcbad1a85aa29e7185d1d90f9e2bbf54db0a7722ea9824e8 not found: ID does not exist" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.626057 4886 scope.go:117] "RemoveContainer" containerID="b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0" Feb 19 21:21:29 crc kubenswrapper[4886]: E0219 21:21:29.626415 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0\": container with ID starting with b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0 not found: ID does not exist" containerID="b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.626450 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0"} err="failed to get container status \"b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0\": rpc error: code = NotFound desc = could not find container \"b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0\": container with ID starting with b7fb84b92549d696443cc10f882ef5c8962cb1c8af8d8e284870dcc9a08515b0 not found: ID does not exist" Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.628368 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79lgs"] Feb 19 21:21:29 crc kubenswrapper[4886]: I0219 21:21:29.931179 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015188 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-sg-core-conf-yaml\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015611 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-run-httpd\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015680 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-config-data\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015774 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-combined-ca-bundle\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015811 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx782\" (UniqueName: \"kubernetes.io/projected/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-kube-api-access-xx782\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015842 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-log-httpd\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.015871 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-scripts\") pod \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\" (UID: \"c38d7aae-1881-4142-a122-f3bdbaf8fbcb\") " Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.020594 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.027459 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-kube-api-access-xx782" (OuterVolumeSpecName: "kube-api-access-xx782") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "kube-api-access-xx782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.029964 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-scripts" (OuterVolumeSpecName: "scripts") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.038408 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.058239 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.118666 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.118697 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.118707 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx782\" (UniqueName: \"kubernetes.io/projected/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-kube-api-access-xx782\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.118718 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 
19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.118728 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.156385 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-config-data" (OuterVolumeSpecName: "config-data") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.170365 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c38d7aae-1881-4142-a122-f3bdbaf8fbcb" (UID: "c38d7aae-1881-4142-a122-f3bdbaf8fbcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.212866 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerID="76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c" exitCode=0 Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.212935 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.212976 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerDied","Data":"76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c"} Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.213960 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38d7aae-1881-4142-a122-f3bdbaf8fbcb","Type":"ContainerDied","Data":"cd7a39079d4f724acce531d752341f8807d292e759d32e90eb5c35141ce209a7"} Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.213980 4886 scope.go:117] "RemoveContainer" containerID="b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.238716 4886 scope.go:117] "RemoveContainer" containerID="2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.242559 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.242594 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38d7aae-1881-4142-a122-f3bdbaf8fbcb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.258790 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.268887 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.273154 4886 scope.go:117] "RemoveContainer" 
containerID="76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.311412 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312055 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-httpd" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312076 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-httpd" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312101 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336b4fc8-890f-4ace-baa3-587ebc3b27db" containerName="heat-db-sync" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312109 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="336b4fc8-890f-4ace-baa3-587ebc3b27db" containerName="heat-db-sync" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312109 4886 scope.go:117] "RemoveContainer" containerID="f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312131 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-central-agent" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312140 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-central-agent" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312150 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="286b0cdb-3a0d-4de1-a288-ccf9494130ef" containerName="init" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312158 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="286b0cdb-3a0d-4de1-a288-ccf9494130ef" containerName="init" Feb 19 21:21:30 crc 
kubenswrapper[4886]: E0219 21:21:30.312172 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-api" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312179 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-api" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312194 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="sg-core" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312205 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="sg-core" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312216 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerName="init" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312225 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerName="init" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312255 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerName="dnsmasq-dns" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312284 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerName="dnsmasq-dns" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312307 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="proxy-httpd" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312331 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="proxy-httpd" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.312361 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-notification-agent" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312374 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-notification-agent" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312657 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="proxy-httpd" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312677 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="336b4fc8-890f-4ace-baa3-587ebc3b27db" containerName="heat-db-sync" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312694 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-api" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312713 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" containerName="dnsmasq-dns" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312733 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="sg-core" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312752 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" containerName="neutron-httpd" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312766 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="286b0cdb-3a0d-4de1-a288-ccf9494130ef" containerName="init" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.312781 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-notification-agent" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 
21:21:30.312804 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" containerName="ceilometer-central-agent" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.328511 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.336275 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.336428 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.365334 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.443836 4886 scope.go:117] "RemoveContainer" containerID="b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.445575 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae\": container with ID starting with b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae not found: ID does not exist" containerID="b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.445610 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae"} err="failed to get container status \"b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae\": rpc error: code = NotFound desc = could not find container \"b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae\": container with ID starting with 
b430eca17ea48d505d06112f0ca23c77251fda01348c1588551241a92dd83fae not found: ID does not exist" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.445634 4886 scope.go:117] "RemoveContainer" containerID="2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.445951 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.445981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.446013 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5\": container with ID starting with 2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5 not found: ID does not exist" containerID="2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446046 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5"} err="failed to get container status \"2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5\": rpc error: code = NotFound desc = could not find container \"2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5\": container with ID starting with 
2286ae9b097737c4a33cea8f51025a196899ac3bc0053aef6509f9f0352189b5 not found: ID does not exist" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446079 4886 scope.go:117] "RemoveContainer" containerID="76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446063 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-run-httpd\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446175 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-config-data\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-log-httpd\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446285 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-scripts\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrll\" (UniqueName: 
\"kubernetes.io/projected/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-kube-api-access-fwrll\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.446359 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c\": container with ID starting with 76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c not found: ID does not exist" containerID="76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446381 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c"} err="failed to get container status \"76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c\": rpc error: code = NotFound desc = could not find container \"76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c\": container with ID starting with 76e28b51db82e8dd635d56284b5ce75115175cd16842335e6012ad0356d11f1c not found: ID does not exist" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446401 4886 scope.go:117] "RemoveContainer" containerID="f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204" Feb 19 21:21:30 crc kubenswrapper[4886]: E0219 21:21:30.446600 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204\": container with ID starting with f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204 not found: ID does not exist" containerID="f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.446622 4886 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204"} err="failed to get container status \"f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204\": rpc error: code = NotFound desc = could not find container \"f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204\": container with ID starting with f7df434bd430e1a2b5656b814f4c72ba8b71906ae3ac9e7c630c8aadab0d9204 not found: ID does not exist" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.520457 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.549883 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-run-httpd\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.549923 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-config-data\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.549981 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-log-httpd\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.550012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-scripts\") pod \"ceilometer-0\" (UID: 
\"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.550039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrll\" (UniqueName: \"kubernetes.io/projected/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-kube-api-access-fwrll\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.550135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.550156 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.552185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-log-httpd\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.552829 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-run-httpd\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.554550 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-config-data" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.557149 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.564343 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.570923 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.582787 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-scripts\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.596339 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-config-data\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.599842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrll\" (UniqueName: \"kubernetes.io/projected/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-kube-api-access-fwrll\") pod \"ceilometer-0\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " pod="openstack/ceilometer-0" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 
21:21:30.642300 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cfc748-e684-4b36-8c1c-a22a1ebaf84c" path="/var/lib/kubelet/pods/80cfc748-e684-4b36-8c1c-a22a1ebaf84c/volumes" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.642977 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9596f93c-54cf-4dfd-b8f7-f6a6c19be356" path="/var/lib/kubelet/pods/9596f93c-54cf-4dfd-b8f7-f6a6c19be356/volumes" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.644486 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38d7aae-1881-4142-a122-f3bdbaf8fbcb" path="/var/lib/kubelet/pods/c38d7aae-1881-4142-a122-f3bdbaf8fbcb/volumes" Feb 19 21:21:30 crc kubenswrapper[4886]: I0219 21:21:30.749009 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:31 crc kubenswrapper[4886]: I0219 21:21:31.250003 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:31 crc kubenswrapper[4886]: I0219 21:21:31.350069 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.251924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerStarted","Data":"7af1103a7b349e71a329823dee670a4d4e4aa5a5abfc997879aefbc2ad8cb3d6"} Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.562470 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.613575 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.657745 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/keystone-5db54896d6-4qqcq" Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.832767 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6669c66696-lw9qb" Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.929122 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6787dcd584-7jqgr"] Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.929387 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6787dcd584-7jqgr" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api-log" containerID="cri-o://fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15" gracePeriod=30 Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.929522 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6787dcd584-7jqgr" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api" containerID="cri-o://4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52" gracePeriod=30 Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.936270 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6787dcd584-7jqgr" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": EOF" Feb 19 21:21:32 crc kubenswrapper[4886]: I0219 21:21:32.936748 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6787dcd584-7jqgr" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": EOF" Feb 19 21:21:33 crc kubenswrapper[4886]: I0219 21:21:33.292182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerStarted","Data":"804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776"} Feb 19 21:21:33 crc kubenswrapper[4886]: I0219 21:21:33.292533 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerStarted","Data":"878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e"} Feb 19 21:21:33 crc kubenswrapper[4886]: I0219 21:21:33.294784 4886 generic.go:334] "Generic (PLEG): container finished" podID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerID="fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15" exitCode=143 Feb 19 21:21:33 crc kubenswrapper[4886]: I0219 21:21:33.294871 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6787dcd584-7jqgr" event={"ID":"df8f4be1-2248-46ee-a97e-1121038c5fd8","Type":"ContainerDied","Data":"fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15"} Feb 19 21:21:33 crc kubenswrapper[4886]: I0219 21:21:33.294980 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="cinder-scheduler" containerID="cri-o://3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031" gracePeriod=30 Feb 19 21:21:33 crc kubenswrapper[4886]: I0219 21:21:33.295068 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="probe" containerID="cri-o://e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade" gracePeriod=30 Feb 19 21:21:34 crc kubenswrapper[4886]: I0219 21:21:34.305597 4886 generic.go:334] "Generic (PLEG): container finished" podID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerID="e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade" exitCode=0 Feb 19 21:21:34 crc 
kubenswrapper[4886]: I0219 21:21:34.305642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1fe0d2-edc0-494f-a9a7-cf568a97929f","Type":"ContainerDied","Data":"e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade"} Feb 19 21:21:34 crc kubenswrapper[4886]: I0219 21:21:34.307991 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerStarted","Data":"847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9"} Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.243438 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.323834 4886 generic.go:334] "Generic (PLEG): container finished" podID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerID="3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031" exitCode=0 Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.323876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1fe0d2-edc0-494f-a9a7-cf568a97929f","Type":"ContainerDied","Data":"3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031"} Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.323901 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1fe0d2-edc0-494f-a9a7-cf568a97929f","Type":"ContainerDied","Data":"5cc87649c8aa5d3364865a88ee28400172ac611252af9a4e1780ad469dec86c6"} Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.323918 4886 scope.go:117] "RemoveContainer" containerID="e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.324047 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.359836 4886 scope.go:117] "RemoveContainer" containerID="3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.376412 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-etc-machine-id\") pod \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.376751 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-combined-ca-bundle\") pod \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.376542 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff1fe0d2-edc0-494f-a9a7-cf568a97929f" (UID: "ff1fe0d2-edc0-494f-a9a7-cf568a97929f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.376933 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data-custom\") pod \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.377071 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-scripts\") pod \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.377227 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw9jb\" (UniqueName: \"kubernetes.io/projected/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-kube-api-access-zw9jb\") pod \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.377279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data\") pod \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\" (UID: \"ff1fe0d2-edc0-494f-a9a7-cf568a97929f\") " Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.378276 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.383230 4886 scope.go:117] "RemoveContainer" containerID="e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade" Feb 19 21:21:35 crc kubenswrapper[4886]: E0219 21:21:35.384500 4886 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade\": container with ID starting with e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade not found: ID does not exist" containerID="e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.384532 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade"} err="failed to get container status \"e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade\": rpc error: code = NotFound desc = could not find container \"e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade\": container with ID starting with e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade not found: ID does not exist" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.384555 4886 scope.go:117] "RemoveContainer" containerID="3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.385485 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-kube-api-access-zw9jb" (OuterVolumeSpecName: "kube-api-access-zw9jb") pod "ff1fe0d2-edc0-494f-a9a7-cf568a97929f" (UID: "ff1fe0d2-edc0-494f-a9a7-cf568a97929f"). InnerVolumeSpecName "kube-api-access-zw9jb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:35 crc kubenswrapper[4886]: E0219 21:21:35.387040 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031\": container with ID starting with 3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031 not found: ID does not exist" containerID="3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.387072 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031"} err="failed to get container status \"3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031\": rpc error: code = NotFound desc = could not find container \"3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031\": container with ID starting with 3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031 not found: ID does not exist" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.399492 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-scripts" (OuterVolumeSpecName: "scripts") pod "ff1fe0d2-edc0-494f-a9a7-cf568a97929f" (UID: "ff1fe0d2-edc0-494f-a9a7-cf568a97929f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.407435 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff1fe0d2-edc0-494f-a9a7-cf568a97929f" (UID: "ff1fe0d2-edc0-494f-a9a7-cf568a97929f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.482410 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.482449 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.482462 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw9jb\" (UniqueName: \"kubernetes.io/projected/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-kube-api-access-zw9jb\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.487420 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff1fe0d2-edc0-494f-a9a7-cf568a97929f" (UID: "ff1fe0d2-edc0-494f-a9a7-cf568a97929f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.557496 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data" (OuterVolumeSpecName: "config-data") pod "ff1fe0d2-edc0-494f-a9a7-cf568a97929f" (UID: "ff1fe0d2-edc0-494f-a9a7-cf568a97929f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.584634 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.584666 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1fe0d2-edc0-494f-a9a7-cf568a97929f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.700490 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.720740 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.725782 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:35 crc kubenswrapper[4886]: E0219 21:21:35.726374 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="cinder-scheduler" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.726399 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="cinder-scheduler" Feb 19 21:21:35 crc kubenswrapper[4886]: E0219 21:21:35.726436 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="probe" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.726445 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="probe" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.726737 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" 
containerName="probe" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.726762 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" containerName="cinder-scheduler" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.728171 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.731167 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.746360 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.799309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1275e6-3c64-49dc-9aa2-308cda6e4772-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.799407 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.799449 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-config-data\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.799493 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.799932 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhrv\" (UniqueName: \"kubernetes.io/projected/6c1275e6-3c64-49dc-9aa2-308cda6e4772-kube-api-access-8fhrv\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.800112 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-scripts\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.902750 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.903039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-config-data\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.903176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.903283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhrv\" (UniqueName: \"kubernetes.io/projected/6c1275e6-3c64-49dc-9aa2-308cda6e4772-kube-api-access-8fhrv\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.903429 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-scripts\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.903584 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1275e6-3c64-49dc-9aa2-308cda6e4772-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.903733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1275e6-3c64-49dc-9aa2-308cda6e4772-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.906922 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " 
pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.906999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-scripts\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.907338 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.907818 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1275e6-3c64-49dc-9aa2-308cda6e4772-config-data\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:35 crc kubenswrapper[4886]: I0219 21:21:35.919067 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhrv\" (UniqueName: \"kubernetes.io/projected/6c1275e6-3c64-49dc-9aa2-308cda6e4772-kube-api-access-8fhrv\") pod \"cinder-scheduler-0\" (UID: \"6c1275e6-3c64-49dc-9aa2-308cda6e4772\") " pod="openstack/cinder-scheduler-0" Feb 19 21:21:36 crc kubenswrapper[4886]: I0219 21:21:36.054671 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 21:21:36 crc kubenswrapper[4886]: I0219 21:21:36.363182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerStarted","Data":"bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592"} Feb 19 21:21:36 crc kubenswrapper[4886]: I0219 21:21:36.363890 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:21:36 crc kubenswrapper[4886]: I0219 21:21:36.414937 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.265392763 podStartE2EDuration="6.414913425s" podCreationTimestamp="2026-02-19 21:21:30 +0000 UTC" firstStartedPulling="2026-02-19 21:21:31.463475225 +0000 UTC m=+1322.091318275" lastFinishedPulling="2026-02-19 21:21:35.612995887 +0000 UTC m=+1326.240838937" observedRunningTime="2026-02-19 21:21:36.38915874 +0000 UTC m=+1327.017001790" watchObservedRunningTime="2026-02-19 21:21:36.414913425 +0000 UTC m=+1327.042756475" Feb 19 21:21:36 crc kubenswrapper[4886]: I0219 21:21:36.612508 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1fe0d2-edc0-494f-a9a7-cf568a97929f" path="/var/lib/kubelet/pods/ff1fe0d2-edc0-494f-a9a7-cf568a97929f/volumes" Feb 19 21:21:36 crc kubenswrapper[4886]: I0219 21:21:36.751213 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.365043 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6787dcd584-7jqgr" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:54094->10.217.0.200:9311: read: connection reset by peer" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.368015 4886 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6787dcd584-7jqgr" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:54096->10.217.0.200:9311: read: connection reset by peer" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.376176 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerStarted","Data":"a7287ad908beb6b5ec5785d234a78f1d003756bd6c20adb334aab390a1cd029a"} Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.759552 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.761423 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.765337 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kq8l7" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.765618 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.765786 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.774595 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.838499 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.203:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.856273 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.856407 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92czq\" (UniqueName: \"kubernetes.io/projected/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-kube-api-access-92czq\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.856459 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-openstack-config-secret\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.856568 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-openstack-config\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.958354 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 
21:21:37.958465 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92czq\" (UniqueName: \"kubernetes.io/projected/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-kube-api-access-92czq\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.958484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-openstack-config-secret\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.958535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-openstack-config\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.959350 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-openstack-config\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.966841 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-openstack-config-secret\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.966982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:37 crc kubenswrapper[4886]: I0219 21:21:37.989480 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92czq\" (UniqueName: \"kubernetes.io/projected/9103da3a-b4d6-412c-a7f7-4ccc5980e8f6-kube-api-access-92czq\") pod \"openstackclient\" (UID: \"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6\") " pod="openstack/openstackclient" Feb 19 21:21:38 crc kubenswrapper[4886]: I0219 21:21:38.082525 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 21:21:38 crc kubenswrapper[4886]: I0219 21:21:38.400251 4886 generic.go:334] "Generic (PLEG): container finished" podID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerID="4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52" exitCode=0 Feb 19 21:21:38 crc kubenswrapper[4886]: I0219 21:21:38.400311 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6787dcd584-7jqgr" event={"ID":"df8f4be1-2248-46ee-a97e-1121038c5fd8","Type":"ContainerDied","Data":"4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52"} Feb 19 21:21:38 crc kubenswrapper[4886]: I0219 21:21:38.403341 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerStarted","Data":"d2b3990b6c91d3065337867bc53cc174c5b1d6fb9ea442e213cdf3039f807277"} Feb 19 21:21:38 crc kubenswrapper[4886]: I0219 21:21:38.754677 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 21:21:38 crc kubenswrapper[4886]: I0219 21:21:38.966195 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.012985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data\") pod \"df8f4be1-2248-46ee-a97e-1121038c5fd8\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.013083 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-combined-ca-bundle\") pod \"df8f4be1-2248-46ee-a97e-1121038c5fd8\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.013148 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnjnq\" (UniqueName: \"kubernetes.io/projected/df8f4be1-2248-46ee-a97e-1121038c5fd8-kube-api-access-wnjnq\") pod \"df8f4be1-2248-46ee-a97e-1121038c5fd8\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.013177 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data-custom\") pod \"df8f4be1-2248-46ee-a97e-1121038c5fd8\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.013226 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f4be1-2248-46ee-a97e-1121038c5fd8-logs\") pod \"df8f4be1-2248-46ee-a97e-1121038c5fd8\" (UID: \"df8f4be1-2248-46ee-a97e-1121038c5fd8\") " Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.015099 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/df8f4be1-2248-46ee-a97e-1121038c5fd8-logs" (OuterVolumeSpecName: "logs") pod "df8f4be1-2248-46ee-a97e-1121038c5fd8" (UID: "df8f4be1-2248-46ee-a97e-1121038c5fd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.020475 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "df8f4be1-2248-46ee-a97e-1121038c5fd8" (UID: "df8f4be1-2248-46ee-a97e-1121038c5fd8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.031479 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8f4be1-2248-46ee-a97e-1121038c5fd8-kube-api-access-wnjnq" (OuterVolumeSpecName: "kube-api-access-wnjnq") pod "df8f4be1-2248-46ee-a97e-1121038c5fd8" (UID: "df8f4be1-2248-46ee-a97e-1121038c5fd8"). InnerVolumeSpecName "kube-api-access-wnjnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.066761 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df8f4be1-2248-46ee-a97e-1121038c5fd8" (UID: "df8f4be1-2248-46ee-a97e-1121038c5fd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.108049 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data" (OuterVolumeSpecName: "config-data") pod "df8f4be1-2248-46ee-a97e-1121038c5fd8" (UID: "df8f4be1-2248-46ee-a97e-1121038c5fd8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.116228 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f4be1-2248-46ee-a97e-1121038c5fd8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.116274 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.116288 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.116299 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnjnq\" (UniqueName: \"kubernetes.io/projected/df8f4be1-2248-46ee-a97e-1121038c5fd8-kube-api-access-wnjnq\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.116310 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f4be1-2248-46ee-a97e-1121038c5fd8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.413205 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6787dcd584-7jqgr" event={"ID":"df8f4be1-2248-46ee-a97e-1121038c5fd8","Type":"ContainerDied","Data":"f01d25a265bb33cb5ea41929056791a921c9060fd81731177b07667844aab5a9"} Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.413307 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6787dcd584-7jqgr" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.414392 4886 scope.go:117] "RemoveContainer" containerID="4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.414368 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6","Type":"ContainerStarted","Data":"ebc042208c0fbf191336d42fa78ecf8e5ab31a0fa9780e4db65296916cd5cd76"} Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.423626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerStarted","Data":"18c07fd504d097cdc072e5277540d62b7f990d64b16fb3b84b83a2450890d1c9"} Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.465833 4886 scope.go:117] "RemoveContainer" containerID="fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15" Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.480321 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6787dcd584-7jqgr"] Feb 19 21:21:39 crc kubenswrapper[4886]: I0219 21:21:39.500762 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6787dcd584-7jqgr"] Feb 19 21:21:40 crc kubenswrapper[4886]: I0219 21:21:40.479416 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.479400408 podStartE2EDuration="5.479400408s" podCreationTimestamp="2026-02-19 21:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:40.474651551 +0000 UTC m=+1331.102494601" watchObservedRunningTime="2026-02-19 21:21:40.479400408 +0000 UTC m=+1331.107243458" Feb 19 21:21:40 crc kubenswrapper[4886]: I0219 21:21:40.615487 4886 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" path="/var/lib/kubelet/pods/df8f4be1-2248-46ee-a97e-1121038c5fd8/volumes" Feb 19 21:21:40 crc kubenswrapper[4886]: I0219 21:21:40.871756 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:41 crc kubenswrapper[4886]: I0219 21:21:41.055701 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 21:21:41 crc kubenswrapper[4886]: I0219 21:21:41.239907 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55ffff6f4b-98277" Feb 19 21:21:41 crc kubenswrapper[4886]: I0219 21:21:41.686852 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.655349 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5c7789bd79-6tsmf"] Feb 19 21:21:42 crc kubenswrapper[4886]: E0219 21:21:42.655922 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api-log" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.655948 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api-log" Feb 19 21:21:42 crc kubenswrapper[4886]: E0219 21:21:42.655975 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.655983 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.656241 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api-log" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.656289 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8f4be1-2248-46ee-a97e-1121038c5fd8" containerName="barbican-api" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.657779 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.659323 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.659928 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.659937 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.688883 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c7789bd79-6tsmf"] Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810482 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-run-httpd\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810634 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-public-tls-certs\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810675 
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-config-data\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810693 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2bl7\" (UniqueName: \"kubernetes.io/projected/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-kube-api-access-r2bl7\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810731 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-log-httpd\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810767 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-etc-swift\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.810786 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-internal-tls-certs\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc 
kubenswrapper[4886]: I0219 21:21:42.810804 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-combined-ca-bundle\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.818927 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.819201 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-central-agent" containerID="cri-o://878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e" gracePeriod=30 Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.819255 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="sg-core" containerID="cri-o://847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9" gracePeriod=30 Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.819279 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="proxy-httpd" containerID="cri-o://bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592" gracePeriod=30 Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.819275 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-notification-agent" containerID="cri-o://804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776" gracePeriod=30 Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 
21:21:42.912323 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-etc-swift\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.912373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-internal-tls-certs\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.912426 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-combined-ca-bundle\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.913332 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-run-httpd\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.913638 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-public-tls-certs\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.913726 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-config-data\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.913752 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2bl7\" (UniqueName: \"kubernetes.io/projected/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-kube-api-access-r2bl7\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.913815 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-log-httpd\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.915163 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-log-httpd\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.916622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-run-httpd\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.918644 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-public-tls-certs\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.919403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-etc-swift\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.922043 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-internal-tls-certs\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.922930 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-combined-ca-bundle\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.924159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-config-data\") pod \"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.933593 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2bl7\" (UniqueName: \"kubernetes.io/projected/3f77b85f-2936-4a1e-80b3-610a06f7dbe3-kube-api-access-r2bl7\") pod 
\"swift-proxy-5c7789bd79-6tsmf\" (UID: \"3f77b85f-2936-4a1e-80b3-610a06f7dbe3\") " pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:42 crc kubenswrapper[4886]: I0219 21:21:42.986969 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.502943 4886 generic.go:334] "Generic (PLEG): container finished" podID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerID="bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592" exitCode=0 Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503346 4886 generic.go:334] "Generic (PLEG): container finished" podID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerID="847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9" exitCode=2 Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503357 4886 generic.go:334] "Generic (PLEG): container finished" podID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerID="804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776" exitCode=0 Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503367 4886 generic.go:334] "Generic (PLEG): container finished" podID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerID="878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e" exitCode=0 Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503387 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerDied","Data":"bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592"} Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503435 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerDied","Data":"847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9"} Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503446 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerDied","Data":"804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776"} Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.503457 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerDied","Data":"878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e"} Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.726220 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c7789bd79-6tsmf"] Feb 19 21:21:43 crc kubenswrapper[4886]: I0219 21:21:43.852251 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039346 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-config-data\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039407 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-scripts\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039451 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-sg-core-conf-yaml\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039480 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-run-httpd\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039512 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-combined-ca-bundle\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039546 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwrll\" (UniqueName: \"kubernetes.io/projected/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-kube-api-access-fwrll\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.039600 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-log-httpd\") pod \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\" (UID: \"2f2e33f3-01f6-4765-90e5-19c90fad9f3f\") " Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.040360 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.040744 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.047018 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-kube-api-access-fwrll" (OuterVolumeSpecName: "kube-api-access-fwrll") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "kube-api-access-fwrll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.048394 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-scripts" (OuterVolumeSpecName: "scripts") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.081410 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.142178 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwrll\" (UniqueName: \"kubernetes.io/projected/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-kube-api-access-fwrll\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.142623 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.143190 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.143324 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.143423 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.165400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.181463 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-config-data" (OuterVolumeSpecName: "config-data") pod "2f2e33f3-01f6-4765-90e5-19c90fad9f3f" (UID: "2f2e33f3-01f6-4765-90e5-19c90fad9f3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.245052 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.245088 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e33f3-01f6-4765-90e5-19c90fad9f3f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.518551 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f2e33f3-01f6-4765-90e5-19c90fad9f3f","Type":"ContainerDied","Data":"7af1103a7b349e71a329823dee670a4d4e4aa5a5abfc997879aefbc2ad8cb3d6"} Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.518623 4886 scope.go:117] "RemoveContainer" containerID="bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.518635 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.525950 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c7789bd79-6tsmf" event={"ID":"3f77b85f-2936-4a1e-80b3-610a06f7dbe3","Type":"ContainerStarted","Data":"2dd3e9c52663cd31b203d8eb1bb33e8dc81a5dfed57e0cf8c8be27c509de3ed5"} Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.525992 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c7789bd79-6tsmf" event={"ID":"3f77b85f-2936-4a1e-80b3-610a06f7dbe3","Type":"ContainerStarted","Data":"f7ba2ebf93031ac7cca5e477e211100a626f5f6de0c02e6f51a7dfa2b80bf0e0"} Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.526003 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c7789bd79-6tsmf" event={"ID":"3f77b85f-2936-4a1e-80b3-610a06f7dbe3","Type":"ContainerStarted","Data":"1d5aa1ebe2eaf06953d27cd134e1caf44a92a8738ffb7942812ef8ee176659e6"} Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.562296 4886 scope.go:117] "RemoveContainer" containerID="847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.565688 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.579105 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.597536 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:44 crc kubenswrapper[4886]: E0219 21:21:44.598063 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-central-agent" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598076 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" 
containerName="ceilometer-central-agent" Feb 19 21:21:44 crc kubenswrapper[4886]: E0219 21:21:44.598091 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="sg-core" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598099 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="sg-core" Feb 19 21:21:44 crc kubenswrapper[4886]: E0219 21:21:44.598121 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-notification-agent" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598127 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-notification-agent" Feb 19 21:21:44 crc kubenswrapper[4886]: E0219 21:21:44.598141 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="proxy-httpd" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598147 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="proxy-httpd" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598337 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-notification-agent" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598358 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="ceilometer-central-agent" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598373 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="proxy-httpd" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.598390 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" containerName="sg-core" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.619467 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.624337 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.624773 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.632473 4886 scope.go:117] "RemoveContainer" containerID="804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-config-data\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665167 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-scripts\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665253 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665417 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-log-httpd\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665546 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-run-httpd\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.665607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wdb\" (UniqueName: \"kubernetes.io/projected/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-kube-api-access-n9wdb\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.670337 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2e33f3-01f6-4765-90e5-19c90fad9f3f" path="/var/lib/kubelet/pods/2f2e33f3-01f6-4765-90e5-19c90fad9f3f/volumes" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.671446 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.679845 4886 scope.go:117] "RemoveContainer" containerID="878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.767699 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768023 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-run-httpd\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wdb\" (UniqueName: \"kubernetes.io/projected/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-kube-api-access-n9wdb\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768120 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-config-data\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-scripts\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768276 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-log-httpd\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.768801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-log-httpd\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.769029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-run-httpd\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.775668 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-scripts\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.775777 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.776737 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-config-data\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.776776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.789206 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wdb\" (UniqueName: \"kubernetes.io/projected/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-kube-api-access-n9wdb\") pod \"ceilometer-0\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " pod="openstack/ceilometer-0" Feb 19 21:21:44 crc kubenswrapper[4886]: I0219 21:21:44.951798 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.548212 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.548839 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.588954 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5c7789bd79-6tsmf" podStartSLOduration=3.588925013 podStartE2EDuration="3.588925013s" podCreationTimestamp="2026-02-19 21:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:45.572554708 +0000 UTC m=+1336.200397758" watchObservedRunningTime="2026-02-19 21:21:45.588925013 +0000 UTC m=+1336.216768083" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.679162 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.714326 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6468c4987f-44r29"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.715925 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.718092 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.722964 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.723213 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-qs8bc" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.734083 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6468c4987f-44r29"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.748429 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-846f97dcd5-gpczq"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.750340 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.753979 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.771305 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-846f97dcd5-gpczq"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.806838 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xhg\" (UniqueName: \"kubernetes.io/projected/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-kube-api-access-d2xhg\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807274 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data-custom\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807379 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgv9\" (UniqueName: \"kubernetes.io/projected/1a55b781-923c-4be6-9a0b-28b6493b93d4-kube-api-access-svgv9\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: 
\"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-combined-ca-bundle\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data-custom\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.807564 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-combined-ca-bundle\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.851554 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-phdqr"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.853950 4886 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.874464 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-phdqr"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.897582 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78d5bd5977-hq7vk"] Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.899903 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.906599 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909205 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-combined-ca-bundle\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909324 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909374 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909405 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data-custom\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909456 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data-custom\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909474 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-combined-ca-bundle\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.909494 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpsh\" (UniqueName: \"kubernetes.io/projected/f710b69f-78df-41a3-84b7-b91560755137-kube-api-access-srpsh\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.911470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.911522 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xhg\" (UniqueName: \"kubernetes.io/projected/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-kube-api-access-d2xhg\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.911642 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data-custom\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.911698 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-combined-ca-bundle\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.912053 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svgv9\" (UniqueName: \"kubernetes.io/projected/1a55b781-923c-4be6-9a0b-28b6493b93d4-kube-api-access-svgv9\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.916219 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-combined-ca-bundle\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.919064 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.922108 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-combined-ca-bundle\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.925464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data-custom\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.932537 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data-custom\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.937029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xhg\" (UniqueName: \"kubernetes.io/projected/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-kube-api-access-d2xhg\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.942828 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-svgv9\" (UniqueName: \"kubernetes.io/projected/1a55b781-923c-4be6-9a0b-28b6493b93d4-kube-api-access-svgv9\") pod \"heat-cfnapi-846f97dcd5-gpczq\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.958480 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data\") pod \"heat-engine-6468c4987f-44r29\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:45 crc kubenswrapper[4886]: I0219 21:21:45.968487 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78d5bd5977-hq7vk"] Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015387 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015435 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwt4\" (UniqueName: \"kubernetes.io/projected/7b6f412f-9f3e-466b-b65d-91fe1a38e212-kube-api-access-lmwt4\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " 
pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015625 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015682 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data-custom\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015813 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpsh\" (UniqueName: \"kubernetes.io/projected/f710b69f-78df-41a3-84b7-b91560755137-kube-api-access-srpsh\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015863 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.015919 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-config\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " 
pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.016077 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-combined-ca-bundle\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.016188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.019558 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data-custom\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.019633 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-combined-ca-bundle\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.024708 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc 
kubenswrapper[4886]: I0219 21:21:46.037022 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpsh\" (UniqueName: \"kubernetes.io/projected/f710b69f-78df-41a3-84b7-b91560755137-kube-api-access-srpsh\") pod \"heat-api-78d5bd5977-hq7vk\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.102983 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.119903 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.120094 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.120143 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.120170 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwt4\" (UniqueName: \"kubernetes.io/projected/7b6f412f-9f3e-466b-b65d-91fe1a38e212-kube-api-access-lmwt4\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.120344 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.120378 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-config\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.120458 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.121599 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.122487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.126067 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-config\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.131499 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.132983 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.145699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwt4\" (UniqueName: \"kubernetes.io/projected/7b6f412f-9f3e-466b-b65d-91fe1a38e212-kube-api-access-lmwt4\") pod \"dnsmasq-dns-7756b9d78c-phdqr\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.185735 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.331496 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.567634 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerStarted","Data":"8b67a65b797ebbca6c0ac6c218b284e2c2034dca55fbd84c91ec1ae27cf26f6b"} Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.655285 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 21:21:46 crc kubenswrapper[4886]: I0219 21:21:46.933914 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-846f97dcd5-gpczq"] Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:46.999766 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6468c4987f-44r29"] Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.195527 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-phdqr"] Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.343533 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78d5bd5977-hq7vk"] Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.582136 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6468c4987f-44r29" event={"ID":"bf0e9257-4c83-4e36-803a-5b85d9cb5e11","Type":"ContainerStarted","Data":"bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235"} Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.582180 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6468c4987f-44r29" event={"ID":"bf0e9257-4c83-4e36-803a-5b85d9cb5e11","Type":"ContainerStarted","Data":"c6f4d4f29904490a59b41df1c07a0c164489a14f434a9ad48a3a386fc7a34908"} Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.582300 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6468c4987f-44r29" 
Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.583154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" event={"ID":"1a55b781-923c-4be6-9a0b-28b6493b93d4","Type":"ContainerStarted","Data":"12b3db2b27e49758c2e8125fad3d5de3f441973f8839447278eb3252f4475bd8"} Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.583955 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78d5bd5977-hq7vk" event={"ID":"f710b69f-78df-41a3-84b7-b91560755137","Type":"ContainerStarted","Data":"d6939056b7ff8ff2c7ce12f7815f8e7b10cd499c9e9a6829f533affbf9c41932"} Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.584992 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerStarted","Data":"8e064cff8f4ab7705b567c5e7be54a95d3a85b9bb76bfbac3285c7a4dbdc167c"} Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.601158 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" event={"ID":"7b6f412f-9f3e-466b-b65d-91fe1a38e212","Type":"ContainerStarted","Data":"58886e3fc4b725226dba8db7b3cc03bcde80ebb6f9a67ba26393f5204a4c36a5"} Feb 19 21:21:47 crc kubenswrapper[4886]: I0219 21:21:47.606080 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6468c4987f-44r29" podStartSLOduration=2.606065755 podStartE2EDuration="2.606065755s" podCreationTimestamp="2026-02-19 21:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:47.599119884 +0000 UTC m=+1338.226962934" watchObservedRunningTime="2026-02-19 21:21:47.606065755 +0000 UTC m=+1338.233908805" Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.158545 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86c8d64dfc-bltj4" Feb 19 
21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.325236 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.325626 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.325457 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58f7d46f6b-95bx9"] Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.325868 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.326891 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.326954 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49" gracePeriod=600 Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.327015 
4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58f7d46f6b-95bx9" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-api" containerID="cri-o://3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b" gracePeriod=30 Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.327216 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58f7d46f6b-95bx9" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-httpd" containerID="cri-o://0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033" gracePeriod=30 Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.635240 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerStarted","Data":"1cf2f2e607b7aaff4d991ca6d93915ad51f6b7906c2e495e32fc94a75b3ea9c1"} Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.637587 4886 generic.go:334] "Generic (PLEG): container finished" podID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerID="6c554772ab49b7361025247ba1bccdebc39601afbd9b65c4b8a093b554f0079b" exitCode=0 Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.637650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" event={"ID":"7b6f412f-9f3e-466b-b65d-91fe1a38e212","Type":"ContainerDied","Data":"6c554772ab49b7361025247ba1bccdebc39601afbd9b65c4b8a093b554f0079b"} Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.645729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49"} Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.645778 4886 scope.go:117] "RemoveContainer" 
containerID="07418cd70dea73874048c57bcddf9f82d5a0a608008d842b73583b9e639a54ec" Feb 19 21:21:48 crc kubenswrapper[4886]: I0219 21:21:48.645688 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49" exitCode=0 Feb 19 21:21:49 crc kubenswrapper[4886]: I0219 21:21:49.666211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" event={"ID":"7b6f412f-9f3e-466b-b65d-91fe1a38e212","Type":"ContainerStarted","Data":"1e61704d5d8073d2137e3517293f27ebf0e154ddea0234e91dffdf3ba5e857cb"} Feb 19 21:21:49 crc kubenswrapper[4886]: I0219 21:21:49.666770 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:49 crc kubenswrapper[4886]: I0219 21:21:49.672512 4886 generic.go:334] "Generic (PLEG): container finished" podID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerID="0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033" exitCode=0 Feb 19 21:21:49 crc kubenswrapper[4886]: I0219 21:21:49.672574 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f7d46f6b-95bx9" event={"ID":"2b04a1d9-e072-4510-92cf-a0698dd7acd7","Type":"ContainerDied","Data":"0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033"} Feb 19 21:21:49 crc kubenswrapper[4886]: I0219 21:21:49.686621 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c"} Feb 19 21:21:49 crc kubenswrapper[4886]: I0219 21:21:49.692358 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" podStartSLOduration=4.692336226 podStartE2EDuration="4.692336226s" 
podCreationTimestamp="2026-02-19 21:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:21:49.685717783 +0000 UTC m=+1340.313560843" watchObservedRunningTime="2026-02-19 21:21:49.692336226 +0000 UTC m=+1340.320179276" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.809527 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-76667cbdb5-lcq2d"] Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.817673 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.854027 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-689bb999db-sc7pf"] Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.855813 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.875906 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8459f774f5-9dr6c"] Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.877613 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.892553 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-76667cbdb5-lcq2d"] Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.911883 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8459f774f5-9dr6c"] Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.943749 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-689bb999db-sc7pf"] Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972645 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-combined-ca-bundle\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-combined-ca-bundle\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972736 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data-custom\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972776 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8v8m\" (UniqueName: 
\"kubernetes.io/projected/aefd494a-9837-494b-8bbe-76f6dfc77f5d-kube-api-access-t8v8m\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ms9\" (UniqueName: \"kubernetes.io/projected/87cb43cc-15f0-4ea9-8731-b664027ff5c9-kube-api-access-w6ms9\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972853 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972885 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data-custom\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972903 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxzv5\" (UniqueName: \"kubernetes.io/projected/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-kube-api-access-rxzv5\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972937 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972974 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data-custom\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.972994 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:52 crc kubenswrapper[4886]: I0219 21:21:52.973010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-combined-ca-bundle\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.005893 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.011492 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c7789bd79-6tsmf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082200 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-combined-ca-bundle\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082324 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-combined-ca-bundle\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082394 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-combined-ca-bundle\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data-custom\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t8v8m\" (UniqueName: \"kubernetes.io/projected/aefd494a-9837-494b-8bbe-76f6dfc77f5d-kube-api-access-t8v8m\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082570 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ms9\" (UniqueName: \"kubernetes.io/projected/87cb43cc-15f0-4ea9-8731-b664027ff5c9-kube-api-access-w6ms9\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.082892 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data-custom\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.083030 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxzv5\" (UniqueName: \"kubernetes.io/projected/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-kube-api-access-rxzv5\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.083205 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.083410 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data-custom\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.106681 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.112809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-combined-ca-bundle\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.120021 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data-custom\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.136063 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-combined-ca-bundle\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.137100 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.139551 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.141353 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data-custom\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.149938 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxzv5\" (UniqueName: \"kubernetes.io/projected/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-kube-api-access-rxzv5\") pod \"heat-engine-76667cbdb5-lcq2d\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.150727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data-custom\") pod 
\"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.152403 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.160566 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-combined-ca-bundle\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.168480 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8v8m\" (UniqueName: \"kubernetes.io/projected/aefd494a-9837-494b-8bbe-76f6dfc77f5d-kube-api-access-t8v8m\") pod \"heat-cfnapi-8459f774f5-9dr6c\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.191388 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ms9\" (UniqueName: \"kubernetes.io/projected/87cb43cc-15f0-4ea9-8731-b664027ff5c9-kube-api-access-w6ms9\") pod \"heat-api-689bb999db-sc7pf\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.192600 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:21:53 crc kubenswrapper[4886]: I0219 21:21:53.223789 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:21:54 crc kubenswrapper[4886]: I0219 21:21:54.792988 4886 generic.go:334] "Generic (PLEG): container finished" podID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerID="3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b" exitCode=0 Feb 19 21:21:54 crc kubenswrapper[4886]: I0219 21:21:54.793350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f7d46f6b-95bx9" event={"ID":"2b04a1d9-e072-4510-92cf-a0698dd7acd7","Type":"ContainerDied","Data":"3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b"} Feb 19 21:21:54 crc kubenswrapper[4886]: I0219 21:21:54.935712 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.392510 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78d5bd5977-hq7vk"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.412345 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d7bdc66c5-25hmr"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.413808 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.418109 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.420684 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.431333 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d7bdc66c5-25hmr"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.476470 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-846f97dcd5-gpczq"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.517745 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b8b45588b-z55k4"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.519644 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.522936 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.523197 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.544335 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b8b45588b-z55k4"] Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.577987 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-combined-ca-bundle\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.578049 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6qd\" (UniqueName: \"kubernetes.io/projected/c2dbc6be-5448-4402-b63d-240f82bb94de-kube-api-access-fx6qd\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.578101 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data-custom\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.578194 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-public-tls-certs\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.578223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.578419 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-internal-tls-certs\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data-custom\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-combined-ca-bundle\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680640 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p9t4n\" (UniqueName: \"kubernetes.io/projected/70bf22da-3158-430f-9485-fdd03fd4b6ff-kube-api-access-p9t4n\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-public-tls-certs\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-internal-tls-certs\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680709 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680739 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data-custom\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680790 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680841 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-internal-tls-certs\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-combined-ca-bundle\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.680975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6qd\" (UniqueName: \"kubernetes.io/projected/c2dbc6be-5448-4402-b63d-240f82bb94de-kube-api-access-fx6qd\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.681007 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-public-tls-certs\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.688532 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data-custom\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.695051 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-internal-tls-certs\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.695411 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-public-tls-certs\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.695600 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-combined-ca-bundle\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.696010 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data\") pod \"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.699062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6qd\" (UniqueName: \"kubernetes.io/projected/c2dbc6be-5448-4402-b63d-240f82bb94de-kube-api-access-fx6qd\") pod 
\"heat-api-6d7bdc66c5-25hmr\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.743722 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.783137 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-public-tls-certs\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.783484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-combined-ca-bundle\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.783505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9t4n\" (UniqueName: \"kubernetes.io/projected/70bf22da-3158-430f-9485-fdd03fd4b6ff-kube-api-access-p9t4n\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.783677 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-internal-tls-certs\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.783740 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data-custom\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.783805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.789330 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-internal-tls-certs\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.789392 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-public-tls-certs\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.789397 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data-custom\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.790092 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-combined-ca-bundle\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.793594 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.805774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9t4n\" (UniqueName: \"kubernetes.io/projected/70bf22da-3158-430f-9485-fdd03fd4b6ff-kube-api-access-p9t4n\") pod \"heat-cfnapi-7b8b45588b-z55k4\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:55 crc kubenswrapper[4886]: I0219 21:21:55.843981 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:21:56 crc kubenswrapper[4886]: I0219 21:21:56.188427 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:21:56 crc kubenswrapper[4886]: I0219 21:21:56.336479 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkgk6"] Feb 19 21:21:56 crc kubenswrapper[4886]: I0219 21:21:56.337060 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="dnsmasq-dns" containerID="cri-o://649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add" gracePeriod=10 Feb 19 21:21:56 crc kubenswrapper[4886]: W0219 21:21:56.699893 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice/crio-7af1103a7b349e71a329823dee670a4d4e4aa5a5abfc997879aefbc2ad8cb3d6 WatchSource:0}: Error finding container 7af1103a7b349e71a329823dee670a4d4e4aa5a5abfc997879aefbc2ad8cb3d6: Status 404 returned error can't find the container with id 7af1103a7b349e71a329823dee670a4d4e4aa5a5abfc997879aefbc2ad8cb3d6 Feb 19 21:21:56 crc kubenswrapper[4886]: W0219 21:21:56.725076 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice/crio-878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e.scope WatchSource:0}: Error finding container 878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e: Status 404 returned error can't find the container with id 878f76c2b0a4897568fb283be3402f96c0a9b269b363329d6c10e96fbf271d5e Feb 19 21:21:56 crc kubenswrapper[4886]: W0219 21:21:56.738971 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice/crio-804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776.scope WatchSource:0}: Error finding container 804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776: Status 404 returned error can't find the container with id 804ac73b07bc46b700d29792ad644d5e4222ccd3f15bf108124e630d2e9cf776 Feb 19 21:21:56 crc kubenswrapper[4886]: W0219 21:21:56.742451 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice/crio-847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9.scope WatchSource:0}: Error finding container 847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9: Status 404 returned error can't find the container with id 847f583553ec4889ab011ad14a1964b65a9f13b028eb4f3f55e8bfa13d1695c9 Feb 19 21:21:56 crc kubenswrapper[4886]: W0219 21:21:56.744655 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice/crio-bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592.scope WatchSource:0}: Error finding container bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592: Status 404 returned error can't find the container with id bfe4346cc2d63cb88da808abc29df258d13de40f9cb29058e4a0058abe6fb592 Feb 19 21:21:56 crc kubenswrapper[4886]: I0219 21:21:56.844284 4886 generic.go:334] "Generic (PLEG): container finished" podID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerID="e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7" exitCode=137 Feb 19 21:21:56 crc kubenswrapper[4886]: I0219 21:21:56.844323 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1","Type":"ContainerDied","Data":"e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7"} Feb 19 21:21:56 crc kubenswrapper[4886]: E0219 21:21:56.898312 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd08739_53bf_4b73_a5c9_8d51d2a7f6c1.slice/crio-e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-conmon-e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-conmon-4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40edad44_68bc_4fbc_9ceb_881436065e53.slice/crio-649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-conmon-3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb096c32d_4192_4529_bc55_b05d09004007.slice/crio-conmon-f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb096c32d_4192_4529_bc55_b05d09004007.slice/crio-f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-f01d25a265bb33cb5ea41929056791a921c9060fd81731177b07667844aab5a9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-conmon-fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:21:56 crc kubenswrapper[4886]: E0219 21:21:56.898723 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-conmon-fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-5cc87649c8aa5d3364865a88ee28400172ac611252af9a4e1780ad469dec86c6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-conmon-e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-conmon-4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd08739_53bf_4b73_a5c9_8d51d2a7f6c1.slice/crio-conmon-e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40edad44_68bc_4fbc_9ceb_881436065e53.slice/crio-649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-conmon-3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:21:56 crc kubenswrapper[4886]: E0219 21:21:56.899147 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2e33f3_01f6_4765_90e5_19c90fad9f3f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-conmon-e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-f01d25a265bb33cb5ea41929056791a921c9060fd81731177b07667844aab5a9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd08739_53bf_4b73_a5c9_8d51d2a7f6c1.slice/crio-e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-e3ccc82201719983389ca04b1439ee8b774cf1a8ed9b0e4599b83b121c286ade.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-3c5ef0329a03943abf2ff22ea7ebc859f3d5bd711a71073ca623ddd364596031.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1fe0d2_edc0_494f_a9a7_cf568a97929f.slice/crio-5cc87649c8aa5d3364865a88ee28400172ac611252af9a4e1780ad469dec86c6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-conmon-4cfcf2730a6cdaa3b35b443f06544160e83d0903278b5ae1e6d201fcdd65ea52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb096c32d_4192_4529_bc55_b05d09004007.slice/crio-conmon-f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8f4be1_2248_46ee_a97e_1121038c5fd8.slice/crio-conmon-fd8290c2a749eb58dc11a5147e77eff9b8f6e82876e3a6f4653f336a647dbc15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40edad44_68bc_4fbc_9ceb_881436065e53.slice/crio-649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:21:56 crc kubenswrapper[4886]: E0219 21:21:56.902178 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd08739_53bf_4b73_a5c9_8d51d2a7f6c1.slice/crio-e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb096c32d_4192_4529_bc55_b05d09004007.slice/crio-f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40edad44_68bc_4fbc_9ceb_881436065e53.slice/crio-649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb096c32d_4192_4529_bc55_b05d09004007.slice/crio-conmon-f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04a1d9_e072_4510_92cf_a0698dd7acd7.slice/crio-conmon-0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd08739_53bf_4b73_a5c9_8d51d2a7f6c1.slice/crio-conmon-e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:21:57 crc kubenswrapper[4886]: I0219 21:21:57.622254 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.202:5353: connect: connection refused" Feb 19 21:21:57 crc kubenswrapper[4886]: I0219 21:21:57.798182 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.203:8776/healthcheck\": dial tcp 10.217.0.203:8776: connect: connection refused" Feb 19 21:21:57 crc kubenswrapper[4886]: I0219 21:21:57.866820 4886 generic.go:334] "Generic (PLEG): container finished" podID="40edad44-68bc-4fbc-9ceb-881436065e53" containerID="649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add" exitCode=0 Feb 19 21:21:57 crc kubenswrapper[4886]: I0219 21:21:57.866861 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" event={"ID":"40edad44-68bc-4fbc-9ceb-881436065e53","Type":"ContainerDied","Data":"649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add"} Feb 19 21:21:59 crc kubenswrapper[4886]: E0219 21:21:59.379408 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 19 21:21:59 crc kubenswrapper[4886]: E0219 21:21:59.381306 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cch566h66bh66bh64h555h59ch548h646hcfh644h55dhb9h4h65h557h64h55dh5b6h645hfdhdfh666h576h9bhb6h678h664h678h66bhcbhf9q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92czq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(9103da3a-b4d6-412c-a7f7-4ccc5980e8f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:21:59 crc kubenswrapper[4886]: E0219 21:21:59.383345 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="9103da3a-b4d6-412c-a7f7-4ccc5980e8f6" Feb 19 21:21:59 crc kubenswrapper[4886]: E0219 21:21:59.890421 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="9103da3a-b4d6-412c-a7f7-4ccc5980e8f6" Feb 19 21:22:00 crc kubenswrapper[4886]: E0219 21:22:00.077576 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-api:current-podified" Feb 19 21:22:00 crc kubenswrapper[4886]: E0219 21:22:00.078191 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-api,Image:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_httpd_setup && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n678h577hd8hcbhc8h55bhc6h695h576h656h599h5b6h5f5h96h88h5b8h59fh54h5cdh75h5bdhd6hfh5bbh5dbh598h5ch54h9fh549h5ffh5c6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:heat-api-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/heat/heat.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8004 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8004 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-api-78d5bd5977-hq7vk_openstack(f710b69f-78df-41a3-84b7-b91560755137): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:22:00 crc kubenswrapper[4886]: E0219 21:22:00.080106 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-api-78d5bd5977-hq7vk" podUID="f710b69f-78df-41a3-84b7-b91560755137" Feb 19 21:22:00 crc kubenswrapper[4886]: E0219 21:22:00.785012 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified" Feb 19 21:22:00 crc kubenswrapper[4886]: E0219 21:22:00.785829 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-cfnapi,Image:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_httpd_setup && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h5bh55h599h65ch66chfch68ch8bh64ch65h5f8h5dch654h59bh68dh557h59ch5cch695h595h68fh549h5bbh687h658hb7h5f4h678h678h5ffh557q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:heat-cfnapi-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/heat/heat.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svgv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-cfnapi-846f97dcd5-gpczq_openstack(1a55b781-923c-4be6-9a0b-28b6493b93d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:22:00 crc kubenswrapper[4886]: E0219 21:22:00.786985 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" podUID="1a55b781-923c-4be6-9a0b-28b6493b93d4" Feb 19 21:22:00 crc kubenswrapper[4886]: I0219 21:22:00.921217 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" event={"ID":"40edad44-68bc-4fbc-9ceb-881436065e53","Type":"ContainerDied","Data":"867dfc1a79d4e129a658a36a6fa6a4027185e6370f328279b87f417a636058cf"} Feb 19 21:22:00 crc kubenswrapper[4886]: I0219 21:22:00.921529 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867dfc1a79d4e129a658a36a6fa6a4027185e6370f328279b87f417a636058cf" Feb 19 21:22:00 crc kubenswrapper[4886]: I0219 21:22:00.946335 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1","Type":"ContainerDied","Data":"981e9bc4aeaa342da12104e807c3c68f1882862555566b26498982f0fe0e2ab2"} Feb 19 21:22:00 crc kubenswrapper[4886]: I0219 21:22:00.946380 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="981e9bc4aeaa342da12104e807c3c68f1882862555566b26498982f0fe0e2ab2" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.208850 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.231444 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.337863 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-svc\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.337896 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.337937 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-sb\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.337963 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.337999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-logs\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338018 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkf4\" (UniqueName: \"kubernetes.io/projected/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-kube-api-access-mmkf4\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338047 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-combined-ca-bundle\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338072 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-config\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338127 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-swift-storage-0\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 
21:22:01.338147 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-etc-machine-id\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-scripts\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2snj5\" (UniqueName: \"kubernetes.io/projected/40edad44-68bc-4fbc-9ceb-881436065e53-kube-api-access-2snj5\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.338629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data-custom\") pod \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\" (UID: \"9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.344666 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-logs" (OuterVolumeSpecName: "logs") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.346388 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.347529 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.369058 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-kube-api-access-mmkf4" (OuterVolumeSpecName: "kube-api-access-mmkf4") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "kube-api-access-mmkf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.411012 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40edad44-68bc-4fbc-9ceb-881436065e53-kube-api-access-2snj5" (OuterVolumeSpecName: "kube-api-access-2snj5") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "kube-api-access-2snj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.414086 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-scripts" (OuterVolumeSpecName: "scripts") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.441857 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.441889 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2snj5\" (UniqueName: \"kubernetes.io/projected/40edad44-68bc-4fbc-9ceb-881436065e53-kube-api-access-2snj5\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.441902 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.441910 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.441918 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmkf4\" (UniqueName: \"kubernetes.io/projected/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-kube-api-access-mmkf4\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.441926 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.466730 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.548964 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.557651 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb\") pod \"40edad44-68bc-4fbc-9ceb-881436065e53\" (UID: \"40edad44-68bc-4fbc-9ceb-881436065e53\") " Feb 19 21:22:01 crc kubenswrapper[4886]: W0219 21:22:01.558059 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/40edad44-68bc-4fbc-9ceb-881436065e53/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.558083 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.563146 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.571441 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.571474 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.571830 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.586865 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.587053 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-config" (OuterVolumeSpecName: "config") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.589719 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40edad44-68bc-4fbc-9ceb-881436065e53" (UID: "40edad44-68bc-4fbc-9ceb-881436065e53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.601384 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data" (OuterVolumeSpecName: "config-data") pod "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" (UID: "9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.679196 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.679233 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.679244 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.679253 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40edad44-68bc-4fbc-9ceb-881436065e53-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.763555 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.782573 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.885999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data-custom\") pod \"f710b69f-78df-41a3-84b7-b91560755137\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886058 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-combined-ca-bundle\") pod \"1a55b781-923c-4be6-9a0b-28b6493b93d4\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886178 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpsh\" (UniqueName: \"kubernetes.io/projected/f710b69f-78df-41a3-84b7-b91560755137-kube-api-access-srpsh\") pod \"f710b69f-78df-41a3-84b7-b91560755137\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-combined-ca-bundle\") pod \"f710b69f-78df-41a3-84b7-b91560755137\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886415 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data-custom\") pod \"1a55b781-923c-4be6-9a0b-28b6493b93d4\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886438 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data\") pod \"1a55b781-923c-4be6-9a0b-28b6493b93d4\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886465 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data\") pod \"f710b69f-78df-41a3-84b7-b91560755137\" (UID: \"f710b69f-78df-41a3-84b7-b91560755137\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.886498 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svgv9\" (UniqueName: \"kubernetes.io/projected/1a55b781-923c-4be6-9a0b-28b6493b93d4-kube-api-access-svgv9\") pod \"1a55b781-923c-4be6-9a0b-28b6493b93d4\" (UID: \"1a55b781-923c-4be6-9a0b-28b6493b93d4\") " Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.897075 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f710b69f-78df-41a3-84b7-b91560755137" (UID: "f710b69f-78df-41a3-84b7-b91560755137"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.898114 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data" (OuterVolumeSpecName: "config-data") pod "f710b69f-78df-41a3-84b7-b91560755137" (UID: "f710b69f-78df-41a3-84b7-b91560755137"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.898215 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a55b781-923c-4be6-9a0b-28b6493b93d4" (UID: "1a55b781-923c-4be6-9a0b-28b6493b93d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.898210 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a55b781-923c-4be6-9a0b-28b6493b93d4-kube-api-access-svgv9" (OuterVolumeSpecName: "kube-api-access-svgv9") pod "1a55b781-923c-4be6-9a0b-28b6493b93d4" (UID: "1a55b781-923c-4be6-9a0b-28b6493b93d4"). InnerVolumeSpecName "kube-api-access-svgv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.898617 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f710b69f-78df-41a3-84b7-b91560755137-kube-api-access-srpsh" (OuterVolumeSpecName: "kube-api-access-srpsh") pod "f710b69f-78df-41a3-84b7-b91560755137" (UID: "f710b69f-78df-41a3-84b7-b91560755137"). InnerVolumeSpecName "kube-api-access-srpsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.899329 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f710b69f-78df-41a3-84b7-b91560755137" (UID: "f710b69f-78df-41a3-84b7-b91560755137"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.899433 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data" (OuterVolumeSpecName: "config-data") pod "1a55b781-923c-4be6-9a0b-28b6493b93d4" (UID: "1a55b781-923c-4be6-9a0b-28b6493b93d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.900498 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a55b781-923c-4be6-9a0b-28b6493b93d4" (UID: "1a55b781-923c-4be6-9a0b-28b6493b93d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.952592 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.962931 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58f7d46f6b-95bx9" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.962958 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f7d46f6b-95bx9" event={"ID":"2b04a1d9-e072-4510-92cf-a0698dd7acd7","Type":"ContainerDied","Data":"be6ae9723bd016c2db6c438278c28309536cf60edaa13da682f9ee122a08308a"} Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.963037 4886 scope.go:117] "RemoveContainer" containerID="0c9890079ee151c143baf5edccbdf22b3f5baacb3e75321af03acc4f7e1fb033" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.975867 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerStarted","Data":"721eab40a3ebc7d523bfdf327c7833c157b58ef1788eae65a8244b177dce89a7"} Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.977372 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" event={"ID":"1a55b781-923c-4be6-9a0b-28b6493b93d4","Type":"ContainerDied","Data":"12b3db2b27e49758c2e8125fad3d5de3f441973f8839447278eb3252f4475bd8"} Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.977441 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-846f97dcd5-gpczq" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.991250 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.992712 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78d5bd5977-hq7vk" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.994657 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78d5bd5977-hq7vk" event={"ID":"f710b69f-78df-41a3-84b7-b91560755137","Type":"ContainerDied","Data":"d6939056b7ff8ff2c7ce12f7815f8e7b10cd499c9e9a6829f533affbf9c41932"} Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.994802 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkgk6" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996665 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996708 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996719 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpsh\" (UniqueName: \"kubernetes.io/projected/f710b69f-78df-41a3-84b7-b91560755137-kube-api-access-srpsh\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996730 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996743 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 
21:22:01.996753 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55b781-923c-4be6-9a0b-28b6493b93d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996761 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f710b69f-78df-41a3-84b7-b91560755137-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:01 crc kubenswrapper[4886]: I0219 21:22:01.996771 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svgv9\" (UniqueName: \"kubernetes.io/projected/1a55b781-923c-4be6-9a0b-28b6493b93d4-kube-api-access-svgv9\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.013327 4886 scope.go:117] "RemoveContainer" containerID="3d4e8633cc2fd86c5b9b2b301ed0802ce18bb3059db2294553022197e3c44e8b" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.098775 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twkbn\" (UniqueName: \"kubernetes.io/projected/2b04a1d9-e072-4510-92cf-a0698dd7acd7-kube-api-access-twkbn\") pod \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.098901 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-config\") pod \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.099449 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-combined-ca-bundle\") pod \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " Feb 
19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.099564 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-ovndb-tls-certs\") pod \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.099628 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-httpd-config\") pod \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\" (UID: \"2b04a1d9-e072-4510-92cf-a0698dd7acd7\") " Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.107522 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2b04a1d9-e072-4510-92cf-a0698dd7acd7" (UID: "2b04a1d9-e072-4510-92cf-a0698dd7acd7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.111305 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b8b45588b-z55k4"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.161422 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b04a1d9-e072-4510-92cf-a0698dd7acd7-kube-api-access-twkbn" (OuterVolumeSpecName: "kube-api-access-twkbn") pod "2b04a1d9-e072-4510-92cf-a0698dd7acd7" (UID: "2b04a1d9-e072-4510-92cf-a0698dd7acd7"). InnerVolumeSpecName "kube-api-access-twkbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.208875 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.209086 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twkbn\" (UniqueName: \"kubernetes.io/projected/2b04a1d9-e072-4510-92cf-a0698dd7acd7-kube-api-access-twkbn\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.211211 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-846f97dcd5-gpczq"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.225138 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-846f97dcd5-gpczq"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.237133 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2b04a1d9-e072-4510-92cf-a0698dd7acd7" (UID: "2b04a1d9-e072-4510-92cf-a0698dd7acd7"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.245237 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78d5bd5977-hq7vk"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.268397 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78d5bd5977-hq7vk"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.272462 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-config" (OuterVolumeSpecName: "config") pod "2b04a1d9-e072-4510-92cf-a0698dd7acd7" (UID: "2b04a1d9-e072-4510-92cf-a0698dd7acd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.291930 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b04a1d9-e072-4510-92cf-a0698dd7acd7" (UID: "2b04a1d9-e072-4510-92cf-a0698dd7acd7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.292874 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.305860 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.310953 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.310995 4886 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.319127 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b04a1d9-e072-4510-92cf-a0698dd7acd7-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.328244 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.328509 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-log" containerID="cri-o://2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f" gracePeriod=30 Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.328957 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-httpd" 
containerID="cri-o://d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b" gracePeriod=30 Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.346806 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:22:02 crc kubenswrapper[4886]: E0219 21:22:02.347661 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="init" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.347677 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="init" Feb 19 21:22:02 crc kubenswrapper[4886]: E0219 21:22:02.347693 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.347699 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api" Feb 19 21:22:02 crc kubenswrapper[4886]: E0219 21:22:02.347748 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-api" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.347755 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-api" Feb 19 21:22:02 crc kubenswrapper[4886]: E0219 21:22:02.347777 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="dnsmasq-dns" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.347783 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="dnsmasq-dns" Feb 19 21:22:02 crc kubenswrapper[4886]: E0219 21:22:02.347809 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api-log" Feb 19 
21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.347815 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api-log" Feb 19 21:22:02 crc kubenswrapper[4886]: E0219 21:22:02.347827 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-httpd" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.347833 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-httpd" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.348041 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api-log" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.348054 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-httpd" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.348069 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" containerName="dnsmasq-dns" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.348082 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" containerName="neutron-api" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.348101 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" containerName="cinder-api" Feb 19 21:22:02 crc kubenswrapper[4886]: W0219 21:22:02.349507 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5304d1d_93a6_42a7_9e2f_e86e83a8e699.slice/crio-c32a42602854f489e55a91ddbba0b37b161dc4c63dd0015ad6715e3b3bd31182 WatchSource:0}: Error finding container c32a42602854f489e55a91ddbba0b37b161dc4c63dd0015ad6715e3b3bd31182: Status 
404 returned error can't find the container with id c32a42602854f489e55a91ddbba0b37b161dc4c63dd0015ad6715e3b3bd31182 Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.350970 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.353613 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.354705 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.354845 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.368296 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkgk6"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.384605 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.395230 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkgk6"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.406762 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d7bdc66c5-25hmr"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.420379 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-689bb999db-sc7pf"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.421918 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc 
kubenswrapper[4886]: I0219 21:22:02.421961 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-config-data\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.421987 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-scripts\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.422016 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.422044 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-logs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.422123 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.422151 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6mq4r\" (UniqueName: \"kubernetes.io/projected/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-kube-api-access-6mq4r\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.422188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-config-data-custom\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.422254 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.431194 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8459f774f5-9dr6c"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.443857 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-76667cbdb5-lcq2d"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525185 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-config-data\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " 
pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525281 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-scripts\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525338 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-logs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525472 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mq4r\" (UniqueName: \"kubernetes.io/projected/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-kube-api-access-6mq4r\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525519 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-config-data-custom\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.525600 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.530821 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.531492 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.531781 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-logs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.534008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.539576 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.539840 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-config-data-custom\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.539978 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-scripts\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.540835 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-config-data\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.550762 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mq4r\" (UniqueName: \"kubernetes.io/projected/e29e342d-8db0-4a72-b5f6-9aebda1fcf06-kube-api-access-6mq4r\") pod \"cinder-api-0\" (UID: \"e29e342d-8db0-4a72-b5f6-9aebda1fcf06\") " pod="openstack/cinder-api-0" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.633973 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a55b781-923c-4be6-9a0b-28b6493b93d4" path="/var/lib/kubelet/pods/1a55b781-923c-4be6-9a0b-28b6493b93d4/volumes" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.634393 4886 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40edad44-68bc-4fbc-9ceb-881436065e53" path="/var/lib/kubelet/pods/40edad44-68bc-4fbc-9ceb-881436065e53/volumes" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.635001 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1" path="/var/lib/kubelet/pods/9cd08739-53bf-4b73-a5c9-8d51d2a7f6c1/volumes" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.638932 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f710b69f-78df-41a3-84b7-b91560755137" path="/var/lib/kubelet/pods/f710b69f-78df-41a3-84b7-b91560755137/volumes" Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.639384 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58f7d46f6b-95bx9"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.643736 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58f7d46f6b-95bx9"] Feb 19 21:22:02 crc kubenswrapper[4886]: I0219 21:22:02.683781 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.024970 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" event={"ID":"70bf22da-3158-430f-9485-fdd03fd4b6ff","Type":"ContainerStarted","Data":"f02416e918182c73d4b06fb0164cb53ba9f25f13d3706d93c8ad5cc616deb228"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.061969 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-76667cbdb5-lcq2d" event={"ID":"c5304d1d-93a6-42a7-9e2f-e86e83a8e699","Type":"ContainerStarted","Data":"5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.062368 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-76667cbdb5-lcq2d" event={"ID":"c5304d1d-93a6-42a7-9e2f-e86e83a8e699","Type":"ContainerStarted","Data":"c32a42602854f489e55a91ddbba0b37b161dc4c63dd0015ad6715e3b3bd31182"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.067465 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.081499 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-76667cbdb5-lcq2d" podStartSLOduration=11.081484714 podStartE2EDuration="11.081484714s" podCreationTimestamp="2026-02-19 21:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:22:03.080216643 +0000 UTC m=+1353.708059693" watchObservedRunningTime="2026-02-19 21:22:03.081484714 +0000 UTC m=+1353.709327764" Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.082863 4886 generic.go:334] "Generic (PLEG): container finished" podID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerID="2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f" 
exitCode=143 Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.082937 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3201a6a-0782-4c3e-b43d-89ce0f4a029c","Type":"ContainerDied","Data":"2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.086341 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d7bdc66c5-25hmr" event={"ID":"c2dbc6be-5448-4402-b63d-240f82bb94de","Type":"ContainerStarted","Data":"fd7344445c658b4a70400cfd0b672189e992bb5ac22f7ededed5d083d9244ea0"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.090185 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689bb999db-sc7pf" event={"ID":"87cb43cc-15f0-4ea9-8731-b664027ff5c9","Type":"ContainerStarted","Data":"eb2c6d7ba1ff6b4efa0fcd21d3ef86bb869c96d0f664b6d5ab0ae7aea3f4cb8f"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.093681 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" event={"ID":"aefd494a-9837-494b-8bbe-76f6dfc77f5d","Type":"ContainerStarted","Data":"dd67d620b65886295d82aec1355291e66246b3215cb80f74655398828240ecff"} Feb 19 21:22:03 crc kubenswrapper[4886]: I0219 21:22:03.481300 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.117304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d7bdc66c5-25hmr" event={"ID":"c2dbc6be-5448-4402-b63d-240f82bb94de","Type":"ContainerStarted","Data":"89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55"} Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.119121 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.121171 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e29e342d-8db0-4a72-b5f6-9aebda1fcf06","Type":"ContainerStarted","Data":"41a8007ea116f03ed45980601f230c4e4801aa06dec95e9c6341e9176629101f"} Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.148168 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d7bdc66c5-25hmr" podStartSLOduration=8.416860654 podStartE2EDuration="9.14815024s" podCreationTimestamp="2026-02-19 21:21:55 +0000 UTC" firstStartedPulling="2026-02-19 21:22:02.314175859 +0000 UTC m=+1352.942018909" lastFinishedPulling="2026-02-19 21:22:03.045465445 +0000 UTC m=+1353.673308495" observedRunningTime="2026-02-19 21:22:04.137495937 +0000 UTC m=+1354.765338997" watchObservedRunningTime="2026-02-19 21:22:04.14815024 +0000 UTC m=+1354.775993300" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.155818 4886 generic.go:334] "Generic (PLEG): container finished" podID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerID="67b28bf0889ce33bc98d1d79aaa2d139239dd9042457261ecf33127cd211b978" exitCode=1 Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.156109 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689bb999db-sc7pf" event={"ID":"87cb43cc-15f0-4ea9-8731-b664027ff5c9","Type":"ContainerDied","Data":"67b28bf0889ce33bc98d1d79aaa2d139239dd9042457261ecf33127cd211b978"} Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.156855 4886 scope.go:117] "RemoveContainer" containerID="67b28bf0889ce33bc98d1d79aaa2d139239dd9042457261ecf33127cd211b978" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.160605 4886 generic.go:334] "Generic (PLEG): container finished" podID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerID="8afc273e60a604463886cca9552e370264f4f672ff768f88df4aeaac259401a4" exitCode=1 Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.160710 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" 
event={"ID":"aefd494a-9837-494b-8bbe-76f6dfc77f5d","Type":"ContainerDied","Data":"8afc273e60a604463886cca9552e370264f4f672ff768f88df4aeaac259401a4"} Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.161595 4886 scope.go:117] "RemoveContainer" containerID="8afc273e60a604463886cca9552e370264f4f672ff768f88df4aeaac259401a4" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.165006 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" event={"ID":"70bf22da-3158-430f-9485-fdd03fd4b6ff","Type":"ContainerStarted","Data":"7e787a7817655415e497bd3652bf130deef61c88ec151456a34f41264170d90c"} Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.165558 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.173484 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerStarted","Data":"a96052d07486d42b40d471bf33ecf90d7a0c827964a73356413fc02a47c038a3"} Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.177578 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-central-agent" containerID="cri-o://8e064cff8f4ab7705b567c5e7be54a95d3a85b9bb76bfbac3285c7a4dbdc167c" gracePeriod=30 Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.177641 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="sg-core" containerID="cri-o://721eab40a3ebc7d523bfdf327c7833c157b58ef1788eae65a8244b177dce89a7" gracePeriod=30 Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.177664 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="proxy-httpd" containerID="cri-o://a96052d07486d42b40d471bf33ecf90d7a0c827964a73356413fc02a47c038a3" gracePeriod=30 Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.177675 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-notification-agent" containerID="cri-o://1cf2f2e607b7aaff4d991ca6d93915ad51f6b7906c2e495e32fc94a75b3ea9c1" gracePeriod=30 Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.178277 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.211182 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" podStartSLOduration=8.743056908 podStartE2EDuration="9.211131685s" podCreationTimestamp="2026-02-19 21:21:55 +0000 UTC" firstStartedPulling="2026-02-19 21:22:02.064418272 +0000 UTC m=+1352.692261322" lastFinishedPulling="2026-02-19 21:22:02.532493049 +0000 UTC m=+1353.160336099" observedRunningTime="2026-02-19 21:22:04.195283334 +0000 UTC m=+1354.823126374" watchObservedRunningTime="2026-02-19 21:22:04.211131685 +0000 UTC m=+1354.838974735" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.270939 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.634100169 podStartE2EDuration="20.270922122s" podCreationTimestamp="2026-02-19 21:21:44 +0000 UTC" firstStartedPulling="2026-02-19 21:21:45.810976005 +0000 UTC m=+1336.438819055" lastFinishedPulling="2026-02-19 21:22:03.447797958 +0000 UTC m=+1354.075641008" observedRunningTime="2026-02-19 21:22:04.23318935 +0000 UTC m=+1354.861032400" watchObservedRunningTime="2026-02-19 21:22:04.270922122 +0000 UTC m=+1354.898765172" Feb 19 21:22:04 crc kubenswrapper[4886]: I0219 21:22:04.619104 
4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b04a1d9-e072-4510-92cf-a0698dd7acd7" path="/var/lib/kubelet/pods/2b04a1d9-e072-4510-92cf-a0698dd7acd7/volumes" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.187622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e29e342d-8db0-4a72-b5f6-9aebda1fcf06","Type":"ContainerStarted","Data":"c4d9439d1f1b30bbcef1e25816a4d8b6247e5a956fa5ee5a75baeae2c2ce7b8b"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.198209 4886 generic.go:334] "Generic (PLEG): container finished" podID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerID="af0916051d0b6b70c642d8cff0dc74491d9bdb4ec930fa2351f2718a7a4e89fb" exitCode=1 Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.198484 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689bb999db-sc7pf" event={"ID":"87cb43cc-15f0-4ea9-8731-b664027ff5c9","Type":"ContainerDied","Data":"af0916051d0b6b70c642d8cff0dc74491d9bdb4ec930fa2351f2718a7a4e89fb"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.198514 4886 scope.go:117] "RemoveContainer" containerID="67b28bf0889ce33bc98d1d79aaa2d139239dd9042457261ecf33127cd211b978" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.199295 4886 scope.go:117] "RemoveContainer" containerID="af0916051d0b6b70c642d8cff0dc74491d9bdb4ec930fa2351f2718a7a4e89fb" Feb 19 21:22:05 crc kubenswrapper[4886]: E0219 21:22:05.199627 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-689bb999db-sc7pf_openstack(87cb43cc-15f0-4ea9-8731-b664027ff5c9)\"" pod="openstack/heat-api-689bb999db-sc7pf" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.230840 4886 generic.go:334] "Generic (PLEG): container finished" podID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" 
containerID="a4eb4fc2f44870a8ba5e4215a0dbc8dcbdf886159c9eca4231a30acf35aa1d42" exitCode=1 Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.230958 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" event={"ID":"aefd494a-9837-494b-8bbe-76f6dfc77f5d","Type":"ContainerDied","Data":"a4eb4fc2f44870a8ba5e4215a0dbc8dcbdf886159c9eca4231a30acf35aa1d42"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.231729 4886 scope.go:117] "RemoveContainer" containerID="a4eb4fc2f44870a8ba5e4215a0dbc8dcbdf886159c9eca4231a30acf35aa1d42" Feb 19 21:22:05 crc kubenswrapper[4886]: E0219 21:22:05.232046 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8459f774f5-9dr6c_openstack(aefd494a-9837-494b-8bbe-76f6dfc77f5d)\"" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253203 4886 generic.go:334] "Generic (PLEG): container finished" podID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerID="a96052d07486d42b40d471bf33ecf90d7a0c827964a73356413fc02a47c038a3" exitCode=0 Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253240 4886 generic.go:334] "Generic (PLEG): container finished" podID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerID="721eab40a3ebc7d523bfdf327c7833c157b58ef1788eae65a8244b177dce89a7" exitCode=2 Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253250 4886 generic.go:334] "Generic (PLEG): container finished" podID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerID="1cf2f2e607b7aaff4d991ca6d93915ad51f6b7906c2e495e32fc94a75b3ea9c1" exitCode=0 Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253288 4886 generic.go:334] "Generic (PLEG): container finished" podID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" 
containerID="8e064cff8f4ab7705b567c5e7be54a95d3a85b9bb76bfbac3285c7a4dbdc167c" exitCode=0 Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerDied","Data":"a96052d07486d42b40d471bf33ecf90d7a0c827964a73356413fc02a47c038a3"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerDied","Data":"721eab40a3ebc7d523bfdf327c7833c157b58ef1788eae65a8244b177dce89a7"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerDied","Data":"1cf2f2e607b7aaff4d991ca6d93915ad51f6b7906c2e495e32fc94a75b3ea9c1"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.253400 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerDied","Data":"8e064cff8f4ab7705b567c5e7be54a95d3a85b9bb76bfbac3285c7a4dbdc167c"} Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.280148 4886 scope.go:117] "RemoveContainer" containerID="8afc273e60a604463886cca9552e370264f4f672ff768f88df4aeaac259401a4" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.407495 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548082 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wdb\" (UniqueName: \"kubernetes.io/projected/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-kube-api-access-n9wdb\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548209 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-log-httpd\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548402 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-scripts\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548594 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-combined-ca-bundle\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548621 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-config-data\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548675 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-sg-core-conf-yaml\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.548713 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-run-httpd\") pod \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\" (UID: \"58c681f2-3b01-4ab7-8ef5-4da1a59e267b\") " Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.549297 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.549934 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.550106 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.554787 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-scripts" (OuterVolumeSpecName: "scripts") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.554878 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-kube-api-access-n9wdb" (OuterVolumeSpecName: "kube-api-access-n9wdb") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "kube-api-access-n9wdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.593069 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.631591 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.652340 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.652369 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.652381 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.652392 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wdb\" (UniqueName: \"kubernetes.io/projected/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-kube-api-access-n9wdb\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.652402 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.675453 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-config-data" (OuterVolumeSpecName: "config-data") pod "58c681f2-3b01-4ab7-8ef5-4da1a59e267b" (UID: "58c681f2-3b01-4ab7-8ef5-4da1a59e267b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:05 crc kubenswrapper[4886]: I0219 21:22:05.754711 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c681f2-3b01-4ab7-8ef5-4da1a59e267b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.179972 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.242913 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271459 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271639 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-scripts\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271674 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-logs\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271703 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-httpd-run\") pod 
\"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271768 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lqm\" (UniqueName: \"kubernetes.io/projected/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-kube-api-access-f8lqm\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271793 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-combined-ca-bundle\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271905 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-internal-tls-certs\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.271922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-config-data\") pod \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\" (UID: \"d3201a6a-0782-4c3e-b43d-89ce0f4a029c\") " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.272685 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.279987 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-scripts" (OuterVolumeSpecName: "scripts") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.280292 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-logs" (OuterVolumeSpecName: "logs") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.282391 4886 generic.go:334] "Generic (PLEG): container finished" podID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerID="d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b" exitCode=0 Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.282480 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3201a6a-0782-4c3e-b43d-89ce0f4a029c","Type":"ContainerDied","Data":"d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b"} Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.282515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3201a6a-0782-4c3e-b43d-89ce0f4a029c","Type":"ContainerDied","Data":"2d5567b8c0f4265b70f08a86fd4db7f6c240e6139de82bb83e6169ae004b0dbb"} Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.282537 4886 scope.go:117] "RemoveContainer" containerID="d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 
21:22:06.282671 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.310385 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-kube-api-access-f8lqm" (OuterVolumeSpecName: "kube-api-access-f8lqm") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "kube-api-access-f8lqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.312462 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e29e342d-8db0-4a72-b5f6-9aebda1fcf06","Type":"ContainerStarted","Data":"7e18af142c004fdeab039a153e7ade167f29e6a980446bc10a5fa94eea1505ee"} Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.313761 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.329744 4886 scope.go:117] "RemoveContainer" containerID="af0916051d0b6b70c642d8cff0dc74491d9bdb4ec930fa2351f2718a7a4e89fb" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.330006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-689bb999db-sc7pf_openstack(87cb43cc-15f0-4ea9-8731-b664027ff5c9)\"" pod="openstack/heat-api-689bb999db-sc7pf" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.341252 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: 
"d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.374387 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.374414 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.374423 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.374431 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8lqm\" (UniqueName: \"kubernetes.io/projected/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-kube-api-access-f8lqm\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.374439 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.402825 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.402800327 podStartE2EDuration="4.402800327s" podCreationTimestamp="2026-02-19 21:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:22:06.359514289 +0000 UTC m=+1356.987357339" watchObservedRunningTime="2026-02-19 21:22:06.402800327 +0000 UTC 
m=+1357.030643377" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.424405 4886 scope.go:117] "RemoveContainer" containerID="a4eb4fc2f44870a8ba5e4215a0dbc8dcbdf886159c9eca4231a30acf35aa1d42" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.425829 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8459f774f5-9dr6c_openstack(aefd494a-9837-494b-8bbe-76f6dfc77f5d)\"" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.431166 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef" (OuterVolumeSpecName: "glance") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "pvc-59d9fab8-7cd6-4599-9419-be74d0657eef". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.431501 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.431522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58c681f2-3b01-4ab7-8ef5-4da1a59e267b","Type":"ContainerDied","Data":"8b67a65b797ebbca6c0ac6c218b284e2c2034dca55fbd84c91ec1ae27cf26f6b"} Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.463835 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.472548 4886 scope.go:117] "RemoveContainer" containerID="2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.498041 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.498113 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") on node \"crc\" " Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.513829 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-config-data" (OuterVolumeSpecName: "config-data") pod "d3201a6a-0782-4c3e-b43d-89ce0f4a029c" (UID: "d3201a6a-0782-4c3e-b43d-89ce0f4a029c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.555485 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.563993 4886 scope.go:117] "RemoveContainer" containerID="d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.564621 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b\": container with ID starting with d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b not found: ID does not exist" containerID="d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.564678 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b"} err="failed to get container status \"d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b\": rpc error: code = NotFound desc = could not find container \"d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b\": container with ID starting with d7b02b86404b7721f6376a8a8b25168dc3a5997994841ee83aa3cf4a782c738b not found: ID does not exist" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.564751 4886 scope.go:117] "RemoveContainer" containerID="2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.565333 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f\": container with ID starting with 2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f not found: ID does not 
exist" containerID="2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.565364 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f"} err="failed to get container status \"2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f\": rpc error: code = NotFound desc = could not find container \"2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f\": container with ID starting with 2f04fff7258c62bb56b4027d02fed09cb706df34b4a1d1aaaf0ea4be84c5da4f not found: ID does not exist" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.565382 4886 scope.go:117] "RemoveContainer" containerID="a96052d07486d42b40d471bf33ecf90d7a0c827964a73356413fc02a47c038a3" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.566876 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.568018 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.568185 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-59d9fab8-7cd6-4599-9419-be74d0657eef" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef") on node "crc" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579097 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.579649 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-log" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579666 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-log" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.579692 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-httpd" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579699 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-httpd" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.579713 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="sg-core" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579719 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="sg-core" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.579740 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-central-agent" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579746 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" 
containerName="ceilometer-central-agent" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.579754 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-notification-agent" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579760 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-notification-agent" Feb 19 21:22:06 crc kubenswrapper[4886]: E0219 21:22:06.579784 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="proxy-httpd" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579792 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="proxy-httpd" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579983 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="proxy-httpd" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.579997 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-central-agent" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.580009 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="sg-core" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.580020 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-httpd" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.580037 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" containerName="glance-log" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.580043 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" containerName="ceilometer-notification-agent" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.582045 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.584380 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.584556 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.585112 4886 scope.go:117] "RemoveContainer" containerID="721eab40a3ebc7d523bfdf327c7833c157b58ef1788eae65a8244b177dce89a7" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.590235 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.599751 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3201a6a-0782-4c3e-b43d-89ce0f4a029c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.599796 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.613080 4886 scope.go:117] "RemoveContainer" containerID="1cf2f2e607b7aaff4d991ca6d93915ad51f6b7906c2e495e32fc94a75b3ea9c1" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.619205 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c681f2-3b01-4ab7-8ef5-4da1a59e267b" path="/var/lib/kubelet/pods/58c681f2-3b01-4ab7-8ef5-4da1a59e267b/volumes" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.635807 4886 scope.go:117] 
"RemoveContainer" containerID="8e064cff8f4ab7705b567c5e7be54a95d3a85b9bb76bfbac3285c7a4dbdc167c" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.643731 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.665805 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.694534 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.696516 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.699107 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.701169 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.702467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-run-httpd\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.702569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-scripts\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.702925 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.702947 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-config-data\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.703075 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-log-httpd\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.703201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zv8\" (UniqueName: \"kubernetes.io/projected/3b480733-19ba-4138-a626-6f68d33a3c61-kube-api-access-l9zv8\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.703227 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.715137 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.805996 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806051 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-config-data\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806088 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-log-httpd\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806197 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zv8\" (UniqueName: \"kubernetes.io/projected/3b480733-19ba-4138-a626-6f68d33a3c61-kube-api-access-l9zv8\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806229 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806282 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806395 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-scripts\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806631 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-log-httpd\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.806888 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-run-httpd\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.811094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.811489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-scripts\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.812789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.813362 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-config-data\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.836998 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zv8\" (UniqueName: \"kubernetes.io/projected/3b480733-19ba-4138-a626-6f68d33a3c61-kube-api-access-l9zv8\") pod \"ceilometer-0\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.905513 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908140 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34878654-09fe-4df8-ab90-2eb4a3ef4e19-logs\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908192 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908231 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908341 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908391 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8f4\" (UniqueName: \"kubernetes.io/projected/34878654-09fe-4df8-ab90-2eb4a3ef4e19-kube-api-access-jq8f4\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908411 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34878654-09fe-4df8-ab90-2eb4a3ef4e19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 
21:22:06 crc kubenswrapper[4886]: I0219 21:22:06.908450 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010418 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010499 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq8f4\" (UniqueName: \"kubernetes.io/projected/34878654-09fe-4df8-ab90-2eb4a3ef4e19-kube-api-access-jq8f4\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010546 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34878654-09fe-4df8-ab90-2eb4a3ef4e19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 
21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010588 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010619 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34878654-09fe-4df8-ab90-2eb4a3ef4e19-logs\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010650 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.010689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.011588 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34878654-09fe-4df8-ab90-2eb4a3ef4e19-logs\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.011586 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34878654-09fe-4df8-ab90-2eb4a3ef4e19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.014873 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.014905 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa954a4659b21b6248d850d29df2a32c3efe0ed6ad4129bfc4bfafd49a05e255/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.015820 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.017453 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.017637 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.034726 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34878654-09fe-4df8-ab90-2eb4a3ef4e19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.042034 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq8f4\" (UniqueName: \"kubernetes.io/projected/34878654-09fe-4df8-ab90-2eb4a3ef4e19-kube-api-access-jq8f4\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.134583 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-59d9fab8-7cd6-4599-9419-be74d0657eef\") pod \"glance-default-internal-api-0\" (UID: \"34878654-09fe-4df8-ab90-2eb4a3ef4e19\") " pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.326819 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.370157 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.371102 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-httpd" containerID="cri-o://e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09" gracePeriod=30 Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.370602 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-log" containerID="cri-o://73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3" gracePeriod=30 Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.511678 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:07 crc kubenswrapper[4886]: W0219 21:22:07.514836 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b480733_19ba_4138_a626_6f68d33a3c61.slice/crio-2fb66cce5ce23d14e4699d163b8e7754080f04e624c334ede24fae6be16e7cd9 WatchSource:0}: Error finding container 2fb66cce5ce23d14e4699d163b8e7754080f04e624c334ede24fae6be16e7cd9: Status 404 returned error can't find the container with id 2fb66cce5ce23d14e4699d163b8e7754080f04e624c334ede24fae6be16e7cd9 Feb 19 21:22:07 crc kubenswrapper[4886]: I0219 21:22:07.988244 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.195098 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 
21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.198676 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.199504 4886 scope.go:117] "RemoveContainer" containerID="af0916051d0b6b70c642d8cff0dc74491d9bdb4ec930fa2351f2718a7a4e89fb" Feb 19 21:22:08 crc kubenswrapper[4886]: E0219 21:22:08.199734 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-689bb999db-sc7pf_openstack(87cb43cc-15f0-4ea9-8731-b664027ff5c9)\"" pod="openstack/heat-api-689bb999db-sc7pf" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.224510 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.224565 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.225368 4886 scope.go:117] "RemoveContainer" containerID="a4eb4fc2f44870a8ba5e4215a0dbc8dcbdf886159c9eca4231a30acf35aa1d42" Feb 19 21:22:08 crc kubenswrapper[4886]: E0219 21:22:08.225637 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8459f774f5-9dr6c_openstack(aefd494a-9837-494b-8bbe-76f6dfc77f5d)\"" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.516727 4886 generic.go:334] "Generic (PLEG): container finished" podID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerID="73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3" exitCode=143 Feb 19 
21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.517175 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45","Type":"ContainerDied","Data":"73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3"} Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.518379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34878654-09fe-4df8-ab90-2eb4a3ef4e19","Type":"ContainerStarted","Data":"b73037224fbcf2e1eb8efa5aceafcf66a3f3ecf31033ea7059440b91e0366f26"} Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.521735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerStarted","Data":"b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f"} Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.521792 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerStarted","Data":"2fb66cce5ce23d14e4699d163b8e7754080f04e624c334ede24fae6be16e7cd9"} Feb 19 21:22:08 crc kubenswrapper[4886]: I0219 21:22:08.617237 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3201a6a-0782-4c3e-b43d-89ce0f4a029c" path="/var/lib/kubelet/pods/d3201a6a-0782-4c3e-b43d-89ce0f4a029c/volumes" Feb 19 21:22:09 crc kubenswrapper[4886]: I0219 21:22:09.535483 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34878654-09fe-4df8-ab90-2eb4a3ef4e19","Type":"ContainerStarted","Data":"e64014487b4d5be8488ded02545915c1e3fec6c94bc278cbecdd9a9cb85410e8"} Feb 19 21:22:09 crc kubenswrapper[4886]: I0219 21:22:09.536015 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"34878654-09fe-4df8-ab90-2eb4a3ef4e19","Type":"ContainerStarted","Data":"d1905701ce61ad082191983c2ca7a7861c66b8ed4bd5f315ae9dcb7f65f4d121"} Feb 19 21:22:09 crc kubenswrapper[4886]: I0219 21:22:09.540168 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerStarted","Data":"ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0"} Feb 19 21:22:09 crc kubenswrapper[4886]: I0219 21:22:09.562180 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.56214845 podStartE2EDuration="3.56214845s" podCreationTimestamp="2026-02-19 21:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:22:09.557973667 +0000 UTC m=+1360.185816717" watchObservedRunningTime="2026-02-19 21:22:09.56214845 +0000 UTC m=+1360.189991500" Feb 19 21:22:10 crc kubenswrapper[4886]: I0219 21:22:10.553655 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerStarted","Data":"c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e"} Feb 19 21:22:10 crc kubenswrapper[4886]: I0219 21:22:10.580898 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": read tcp 10.217.0.2:34608->10.217.0.190:9292: read: connection reset by peer" Feb 19 21:22:10 crc kubenswrapper[4886]: I0219 21:22:10.581427 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-httpd" probeResult="failure" output="Get 
\"https://10.217.0.190:9292/healthcheck\": read tcp 10.217.0.2:34602->10.217.0.190:9292: read: connection reset by peer" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.233498 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.262448 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-logs\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.262497 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-httpd-run\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.262553 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-config-data\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.262939 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-logs" (OuterVolumeSpecName: "logs") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.263040 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.263307 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.263354 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8pfm\" (UniqueName: \"kubernetes.io/projected/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-kube-api-access-q8pfm\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.263373 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-scripts\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.263414 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-combined-ca-bundle\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.263460 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-public-tls-certs\") pod \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\" (UID: \"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45\") " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.264019 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.264035 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.276890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-kube-api-access-q8pfm" (OuterVolumeSpecName: "kube-api-access-q8pfm") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "kube-api-access-q8pfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.277252 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-scripts" (OuterVolumeSpecName: "scripts") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.315575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1" (OuterVolumeSpecName: "glance") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.365562 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") on node \"crc\" " Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.365785 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8pfm\" (UniqueName: \"kubernetes.io/projected/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-kube-api-access-q8pfm\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.365845 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.379621 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.444006 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-config-data" (OuterVolumeSpecName: "config-data") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.458422 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.458559 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1") on node "crc" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.465169 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" (UID: "af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.478871 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.478902 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.478912 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.478926 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.562513 4886 generic.go:334] "Generic (PLEG): container finished" podID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerID="e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09" exitCode=0 Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.562554 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45","Type":"ContainerDied","Data":"e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09"} Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.562582 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45","Type":"ContainerDied","Data":"8c91043156607e89e0de683b411f1a5970c6664f8c3dbb3e7dc6c4c4864cee03"} Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.562598 4886 scope.go:117] "RemoveContainer" containerID="e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.562722 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.600662 4886 scope.go:117] "RemoveContainer" containerID="73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.603122 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.623938 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.628996 4886 scope.go:117] "RemoveContainer" containerID="e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09" Feb 19 21:22:11 crc kubenswrapper[4886]: E0219 21:22:11.629439 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09\": container with ID starting with e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09 not found: ID does not exist" containerID="e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.629470 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09"} err="failed to get container status 
\"e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09\": rpc error: code = NotFound desc = could not find container \"e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09\": container with ID starting with e9893877ff85644672925b9cc2bcede1013c50e3646420cc1b7c4a97b98f1b09 not found: ID does not exist" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.629496 4886 scope.go:117] "RemoveContainer" containerID="73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3" Feb 19 21:22:11 crc kubenswrapper[4886]: E0219 21:22:11.629847 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3\": container with ID starting with 73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3 not found: ID does not exist" containerID="73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.629873 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3"} err="failed to get container status \"73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3\": rpc error: code = NotFound desc = could not find container \"73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3\": container with ID starting with 73747f99b4299a1670672efdc0f314bbf5a1a7d673de9b869d235d9a7e9d68b3 not found: ID does not exist" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.634835 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:22:11 crc kubenswrapper[4886]: E0219 21:22:11.635300 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-log" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.635316 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-log" Feb 19 21:22:11 crc kubenswrapper[4886]: E0219 21:22:11.635331 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-httpd" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.635338 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-httpd" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.635569 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-log" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.635583 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" containerName="glance-httpd" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.636745 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.639588 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.639953 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.663331 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.682645 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-config-data\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.682861 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99326eb3-57ca-429e-a411-90a6c2a63d25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.682960 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-scripts\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.683040 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.683162 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.683349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99326eb3-57ca-429e-a411-90a6c2a63d25-logs\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.683427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.683504 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzv8h\" (UniqueName: \"kubernetes.io/projected/99326eb3-57ca-429e-a411-90a6c2a63d25-kube-api-access-wzv8h\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785089 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99326eb3-57ca-429e-a411-90a6c2a63d25-logs\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785138 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785170 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzv8h\" (UniqueName: \"kubernetes.io/projected/99326eb3-57ca-429e-a411-90a6c2a63d25-kube-api-access-wzv8h\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785231 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-config-data\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785250 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99326eb3-57ca-429e-a411-90a6c2a63d25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785302 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-scripts\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785333 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.785402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.786153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99326eb3-57ca-429e-a411-90a6c2a63d25-logs\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.786167 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99326eb3-57ca-429e-a411-90a6c2a63d25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.787402 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.787435 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24501a7431bc13ebaff906718e6fc5aea919dc38d675793dfa72a2f1e9cb67ce/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.790894 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.791642 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-config-data\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.792897 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-scripts\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.792874 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99326eb3-57ca-429e-a411-90a6c2a63d25-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.801868 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzv8h\" (UniqueName: \"kubernetes.io/projected/99326eb3-57ca-429e-a411-90a6c2a63d25-kube-api-access-wzv8h\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.844485 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2c4691-8a1d-44bb-a4d2-05cc41426fb1\") pod \"glance-default-external-api-0\" (UID: \"99326eb3-57ca-429e-a411-90a6c2a63d25\") " pod="openstack/glance-default-external-api-0" Feb 19 21:22:11 crc kubenswrapper[4886]: I0219 21:22:11.959295 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.518655 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.699656 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45" path="/var/lib/kubelet/pods/af2a6d96-8a2b-4e88-a4e3-00ef5fb74e45/volumes" Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.701885 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-689bb999db-sc7pf"] Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.701932 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.701950 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerStarted","Data":"59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785"} Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.777779 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.194012622 podStartE2EDuration="6.777758573s" podCreationTimestamp="2026-02-19 21:22:06 +0000 UTC" firstStartedPulling="2026-02-19 21:22:07.524497392 +0000 UTC m=+1358.152340442" lastFinishedPulling="2026-02-19 21:22:12.108243343 +0000 UTC m=+1362.736086393" observedRunningTime="2026-02-19 21:22:12.713682053 +0000 UTC m=+1363.341525103" watchObservedRunningTime="2026-02-19 21:22:12.777758573 +0000 UTC m=+1363.405601623" Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.807589 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.857592 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:22:12 crc kubenswrapper[4886]: I0219 21:22:12.931052 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8459f774f5-9dr6c"] Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.223252 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.313323 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6468c4987f-44r29"] Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.313588 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6468c4987f-44r29" podUID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" containerName="heat-engine" containerID="cri-o://bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" gracePeriod=60 Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.548654 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.606827 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.680277 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-combined-ca-bundle\") pod \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.680384 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ms9\" (UniqueName: \"kubernetes.io/projected/87cb43cc-15f0-4ea9-8731-b664027ff5c9-kube-api-access-w6ms9\") pod \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.680457 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data-custom\") pod \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.680646 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data\") pod \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\" (UID: \"87cb43cc-15f0-4ea9-8731-b664027ff5c9\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.687404 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cb43cc-15f0-4ea9-8731-b664027ff5c9-kube-api-access-w6ms9" (OuterVolumeSpecName: "kube-api-access-w6ms9") pod "87cb43cc-15f0-4ea9-8731-b664027ff5c9" (UID: "87cb43cc-15f0-4ea9-8731-b664027ff5c9"). InnerVolumeSpecName "kube-api-access-w6ms9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.700650 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87cb43cc-15f0-4ea9-8731-b664027ff5c9" (UID: "87cb43cc-15f0-4ea9-8731-b664027ff5c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.717544 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689bb999db-sc7pf" event={"ID":"87cb43cc-15f0-4ea9-8731-b664027ff5c9","Type":"ContainerDied","Data":"eb2c6d7ba1ff6b4efa0fcd21d3ef86bb869c96d0f664b6d5ab0ae7aea3f4cb8f"} Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.717609 4886 scope.go:117] "RemoveContainer" containerID="af0916051d0b6b70c642d8cff0dc74491d9bdb4ec930fa2351f2718a7a4e89fb" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.718422 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689bb999db-sc7pf" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.739474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99326eb3-57ca-429e-a411-90a6c2a63d25","Type":"ContainerStarted","Data":"c3ed4d2e89b9159f84ab0de338eb3c512aa7036d027cb6be642c4c062211e96e"} Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.743884 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.744897 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8459f774f5-9dr6c" event={"ID":"aefd494a-9837-494b-8bbe-76f6dfc77f5d","Type":"ContainerDied","Data":"dd67d620b65886295d82aec1355291e66246b3215cb80f74655398828240ecff"} Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.782467 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data\") pod \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.782990 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8v8m\" (UniqueName: \"kubernetes.io/projected/aefd494a-9837-494b-8bbe-76f6dfc77f5d-kube-api-access-t8v8m\") pod \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.783122 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-combined-ca-bundle\") pod \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.783232 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data-custom\") pod \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\" (UID: \"aefd494a-9837-494b-8bbe-76f6dfc77f5d\") " Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.784007 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6ms9\" (UniqueName: 
\"kubernetes.io/projected/87cb43cc-15f0-4ea9-8731-b664027ff5c9-kube-api-access-w6ms9\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.785068 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.784206 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87cb43cc-15f0-4ea9-8731-b664027ff5c9" (UID: "87cb43cc-15f0-4ea9-8731-b664027ff5c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.792432 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data" (OuterVolumeSpecName: "config-data") pod "87cb43cc-15f0-4ea9-8731-b664027ff5c9" (UID: "87cb43cc-15f0-4ea9-8731-b664027ff5c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.795279 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefd494a-9837-494b-8bbe-76f6dfc77f5d-kube-api-access-t8v8m" (OuterVolumeSpecName: "kube-api-access-t8v8m") pod "aefd494a-9837-494b-8bbe-76f6dfc77f5d" (UID: "aefd494a-9837-494b-8bbe-76f6dfc77f5d"). InnerVolumeSpecName "kube-api-access-t8v8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.797092 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aefd494a-9837-494b-8bbe-76f6dfc77f5d" (UID: "aefd494a-9837-494b-8bbe-76f6dfc77f5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.822557 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aefd494a-9837-494b-8bbe-76f6dfc77f5d" (UID: "aefd494a-9837-494b-8bbe-76f6dfc77f5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.871298 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data" (OuterVolumeSpecName: "config-data") pod "aefd494a-9837-494b-8bbe-76f6dfc77f5d" (UID: "aefd494a-9837-494b-8bbe-76f6dfc77f5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.887846 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8v8m\" (UniqueName: \"kubernetes.io/projected/aefd494a-9837-494b-8bbe-76f6dfc77f5d-kube-api-access-t8v8m\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.887886 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.887899 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.887911 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.887924 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefd494a-9837-494b-8bbe-76f6dfc77f5d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.887937 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87cb43cc-15f0-4ea9-8731-b664027ff5c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:13 crc kubenswrapper[4886]: I0219 21:22:13.971653 4886 scope.go:117] "RemoveContainer" containerID="a4eb4fc2f44870a8ba5e4215a0dbc8dcbdf886159c9eca4231a30acf35aa1d42" Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.091710 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-689bb999db-sc7pf"] Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.102744 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-689bb999db-sc7pf"] Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.144332 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8459f774f5-9dr6c"] Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.165573 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-8459f774f5-9dr6c"] Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.184234 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.612588 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" path="/var/lib/kubelet/pods/87cb43cc-15f0-4ea9-8731-b664027ff5c9/volumes" Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.613215 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" path="/var/lib/kubelet/pods/aefd494a-9837-494b-8bbe-76f6dfc77f5d/volumes" Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.755463 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99326eb3-57ca-429e-a411-90a6c2a63d25","Type":"ContainerStarted","Data":"f961e808bf393749ba296895b31c84507376fd8d939c7f1b4b39cc654c693c08"} Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.755736 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"99326eb3-57ca-429e-a411-90a6c2a63d25","Type":"ContainerStarted","Data":"f640eedc2b8d299701955a52084218eaf430d5f6e03c0b084f587555040fd769"} Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.757272 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-central-agent" containerID="cri-o://b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f" gracePeriod=30 Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.757499 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="proxy-httpd" containerID="cri-o://59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785" gracePeriod=30 Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.757544 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="sg-core" containerID="cri-o://c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e" gracePeriod=30 Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.757573 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-notification-agent" containerID="cri-o://ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0" gracePeriod=30 Feb 19 21:22:14 crc kubenswrapper[4886]: I0219 21:22:14.801845 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.801823995 podStartE2EDuration="3.801823995s" podCreationTimestamp="2026-02-19 21:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:22:14.784543959 +0000 UTC m=+1365.412386999" watchObservedRunningTime="2026-02-19 21:22:14.801823995 +0000 UTC m=+1365.429667035" Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.513184 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 21:22:15 crc 
kubenswrapper[4886]: I0219 21:22:15.772826 4886 generic.go:334] "Generic (PLEG): container finished" podID="3b480733-19ba-4138-a626-6f68d33a3c61" containerID="59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785" exitCode=0 Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.772855 4886 generic.go:334] "Generic (PLEG): container finished" podID="3b480733-19ba-4138-a626-6f68d33a3c61" containerID="c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e" exitCode=2 Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.772863 4886 generic.go:334] "Generic (PLEG): container finished" podID="3b480733-19ba-4138-a626-6f68d33a3c61" containerID="ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0" exitCode=0 Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.772898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerDied","Data":"59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785"} Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.772922 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerDied","Data":"c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e"} Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.772933 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerDied","Data":"ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0"} Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 21:22:15.775216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9103da3a-b4d6-412c-a7f7-4ccc5980e8f6","Type":"ContainerStarted","Data":"4267910d9e6416cd231f1b7bac344d6fbe77a82acbf02edeadb3e94aab9e3953"} Feb 19 21:22:15 crc kubenswrapper[4886]: I0219 
21:22:15.797915 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.585840251 podStartE2EDuration="38.797898086s" podCreationTimestamp="2026-02-19 21:21:37 +0000 UTC" firstStartedPulling="2026-02-19 21:21:38.823432252 +0000 UTC m=+1329.451275302" lastFinishedPulling="2026-02-19 21:22:15.035490087 +0000 UTC m=+1365.663333137" observedRunningTime="2026-02-19 21:22:15.790185746 +0000 UTC m=+1366.418028796" watchObservedRunningTime="2026-02-19 21:22:15.797898086 +0000 UTC m=+1366.425741136" Feb 19 21:22:16 crc kubenswrapper[4886]: E0219 21:22:16.106677 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:22:16 crc kubenswrapper[4886]: E0219 21:22:16.109788 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:22:16 crc kubenswrapper[4886]: E0219 21:22:16.111800 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:22:16 crc kubenswrapper[4886]: E0219 21:22:16.111848 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/heat-engine-6468c4987f-44r29" podUID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" containerName="heat-engine" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.211234 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-m9xgr"] Feb 19 21:22:17 crc kubenswrapper[4886]: E0219 21:22:17.211957 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerName="heat-api" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.211969 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerName="heat-api" Feb 19 21:22:17 crc kubenswrapper[4886]: E0219 21:22:17.211986 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerName="heat-cfnapi" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.211992 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerName="heat-cfnapi" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.212203 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerName="heat-cfnapi" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.212214 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerName="heat-api" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.212230 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerName="heat-api" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.212242 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerName="heat-cfnapi" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.213010 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.238069 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m9xgr"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.328343 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.328404 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.357377 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-001b-account-create-update-56t7m"] Feb 19 21:22:17 crc kubenswrapper[4886]: E0219 21:22:17.358233 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerName="heat-api" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.358270 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cb43cc-15f0-4ea9-8731-b664027ff5c9" containerName="heat-api" Feb 19 21:22:17 crc kubenswrapper[4886]: E0219 21:22:17.358291 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerName="heat-cfnapi" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.358299 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefd494a-9837-494b-8bbe-76f6dfc77f5d" containerName="heat-cfnapi" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.359540 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.362132 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.393785 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.400021 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb05e42-f78a-402a-9ea7-50fd859e9b29-operator-scripts\") pod \"nova-api-db-create-m9xgr\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.400224 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dphh\" (UniqueName: \"kubernetes.io/projected/ecb05e42-f78a-402a-9ea7-50fd859e9b29-kube-api-access-5dphh\") pod \"nova-api-db-create-m9xgr\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.415718 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.452644 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-x9bqh"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.480039 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.502615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-operator-scripts\") pod \"nova-api-001b-account-create-update-56t7m\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.502664 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dpj\" (UniqueName: \"kubernetes.io/projected/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-kube-api-access-96dpj\") pod \"nova-api-001b-account-create-update-56t7m\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.502829 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb05e42-f78a-402a-9ea7-50fd859e9b29-operator-scripts\") pod \"nova-api-db-create-m9xgr\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.502934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dphh\" (UniqueName: \"kubernetes.io/projected/ecb05e42-f78a-402a-9ea7-50fd859e9b29-kube-api-access-5dphh\") pod \"nova-api-db-create-m9xgr\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.504977 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb05e42-f78a-402a-9ea7-50fd859e9b29-operator-scripts\") pod 
\"nova-api-db-create-m9xgr\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.538327 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x9bqh"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.573790 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dphh\" (UniqueName: \"kubernetes.io/projected/ecb05e42-f78a-402a-9ea7-50fd859e9b29-kube-api-access-5dphh\") pod \"nova-api-db-create-m9xgr\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.604113 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.606622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkhq\" (UniqueName: \"kubernetes.io/projected/46ceffae-e399-457a-80fe-152a1a143641-kube-api-access-hmkhq\") pod \"nova-cell0-db-create-x9bqh\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.606718 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ceffae-e399-457a-80fe-152a1a143641-operator-scripts\") pod \"nova-cell0-db-create-x9bqh\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.606750 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-operator-scripts\") pod \"nova-api-001b-account-create-update-56t7m\" (UID: 
\"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.606768 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dpj\" (UniqueName: \"kubernetes.io/projected/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-kube-api-access-96dpj\") pod \"nova-api-001b-account-create-update-56t7m\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.612616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-operator-scripts\") pod \"nova-api-001b-account-create-update-56t7m\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.647783 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-v4wd4"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.649472 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.667674 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96dpj\" (UniqueName: \"kubernetes.io/projected/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-kube-api-access-96dpj\") pod \"nova-api-001b-account-create-update-56t7m\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.681019 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.688156 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-001b-account-create-update-56t7m"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.698859 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c948-account-create-update-hjlvn"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.700819 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.708684 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.711243 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a70275-beb0-4c27-8e45-b8a8143c34a2-operator-scripts\") pod \"nova-cell1-db-create-v4wd4\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.711320 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmkhq\" (UniqueName: \"kubernetes.io/projected/46ceffae-e399-457a-80fe-152a1a143641-kube-api-access-hmkhq\") pod \"nova-cell0-db-create-x9bqh\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.711398 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ceffae-e399-457a-80fe-152a1a143641-operator-scripts\") pod \"nova-cell0-db-create-x9bqh\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " pod="openstack/nova-cell0-db-create-x9bqh" Feb 
19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.711529 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mlf6\" (UniqueName: \"kubernetes.io/projected/11a70275-beb0-4c27-8e45-b8a8143c34a2-kube-api-access-2mlf6\") pod \"nova-cell1-db-create-v4wd4\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.725686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ceffae-e399-457a-80fe-152a1a143641-operator-scripts\") pod \"nova-cell0-db-create-x9bqh\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.760461 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmkhq\" (UniqueName: \"kubernetes.io/projected/46ceffae-e399-457a-80fe-152a1a143641-kube-api-access-hmkhq\") pod \"nova-cell0-db-create-x9bqh\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.800942 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c948-account-create-update-hjlvn"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.813781 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.814593 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncj4\" (UniqueName: \"kubernetes.io/projected/23217e5b-f7df-460d-abd9-ada19eeb839a-kube-api-access-zncj4\") pod \"nova-cell0-c948-account-create-update-hjlvn\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.814705 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23217e5b-f7df-460d-abd9-ada19eeb839a-operator-scripts\") pod \"nova-cell0-c948-account-create-update-hjlvn\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.814814 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mlf6\" (UniqueName: \"kubernetes.io/projected/11a70275-beb0-4c27-8e45-b8a8143c34a2-kube-api-access-2mlf6\") pod \"nova-cell1-db-create-v4wd4\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.814842 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a70275-beb0-4c27-8e45-b8a8143c34a2-operator-scripts\") pod \"nova-cell1-db-create-v4wd4\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.815714 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/11a70275-beb0-4c27-8e45-b8a8143c34a2-operator-scripts\") pod \"nova-cell1-db-create-v4wd4\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.841660 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v4wd4"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.849214 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.849244 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.883789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mlf6\" (UniqueName: \"kubernetes.io/projected/11a70275-beb0-4c27-8e45-b8a8143c34a2-kube-api-access-2mlf6\") pod \"nova-cell1-db-create-v4wd4\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.938363 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cd2f-account-create-update-hcj7k"] Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.939553 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zncj4\" (UniqueName: \"kubernetes.io/projected/23217e5b-f7df-460d-abd9-ada19eeb839a-kube-api-access-zncj4\") pod \"nova-cell0-c948-account-create-update-hjlvn\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.939637 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23217e5b-f7df-460d-abd9-ada19eeb839a-operator-scripts\") pod 
\"nova-cell0-c948-account-create-update-hjlvn\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.939804 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.940479 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23217e5b-f7df-460d-abd9-ada19eeb839a-operator-scripts\") pod \"nova-cell0-c948-account-create-update-hjlvn\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:17 crc kubenswrapper[4886]: I0219 21:22:17.948670 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.003064 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cd2f-account-create-update-hcj7k"] Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.019451 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zncj4\" (UniqueName: \"kubernetes.io/projected/23217e5b-f7df-460d-abd9-ada19eeb839a-kube-api-access-zncj4\") pod \"nova-cell0-c948-account-create-update-hjlvn\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.042002 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c79h4\" (UniqueName: \"kubernetes.io/projected/e6f6e153-c56c-4b3f-8c0c-457f163ac959-kube-api-access-c79h4\") pod \"nova-cell1-cd2f-account-create-update-hcj7k\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 
21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.042409 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6f6e153-c56c-4b3f-8c0c-457f163ac959-operator-scripts\") pod \"nova-cell1-cd2f-account-create-update-hcj7k\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.143795 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6f6e153-c56c-4b3f-8c0c-457f163ac959-operator-scripts\") pod \"nova-cell1-cd2f-account-create-update-hcj7k\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.144142 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c79h4\" (UniqueName: \"kubernetes.io/projected/e6f6e153-c56c-4b3f-8c0c-457f163ac959-kube-api-access-c79h4\") pod \"nova-cell1-cd2f-account-create-update-hcj7k\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.145451 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6f6e153-c56c-4b3f-8c0c-457f163ac959-operator-scripts\") pod \"nova-cell1-cd2f-account-create-update-hcj7k\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.165499 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c79h4\" (UniqueName: \"kubernetes.io/projected/e6f6e153-c56c-4b3f-8c0c-457f163ac959-kube-api-access-c79h4\") pod 
\"nova-cell1-cd2f-account-create-update-hcj7k\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.179455 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.202126 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.229215 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.555444 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m9xgr"] Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.649442 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-001b-account-create-update-56t7m"] Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.740731 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x9bqh"] Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.863602 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-001b-account-create-update-56t7m" event={"ID":"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc","Type":"ContainerStarted","Data":"e8ec3863de8be75f4d9e1537d2ec201db595127500d883a294453e458eec28d7"} Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.864632 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x9bqh" event={"ID":"46ceffae-e399-457a-80fe-152a1a143641","Type":"ContainerStarted","Data":"1240130e422edae29b40f993c5cb217bec32a5d9a2d3a57c5ef65db76a6b1304"} Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.876346 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-m9xgr" event={"ID":"ecb05e42-f78a-402a-9ea7-50fd859e9b29","Type":"ContainerStarted","Data":"2d634c8c798cfd9d587619479c223c93f9861d432ba317f30f3879857fc62b62"} Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.923104 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v4wd4"] Feb 19 21:22:18 crc kubenswrapper[4886]: W0219 21:22:18.925748 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11a70275_beb0_4c27_8e45_b8a8143c34a2.slice/crio-503855744e479fb0e54e06a5de3db1b1b8365ec78929fbcf1688fac503b5b670 WatchSource:0}: Error finding container 503855744e479fb0e54e06a5de3db1b1b8365ec78929fbcf1688fac503b5b670: Status 404 returned error can't find the container with id 503855744e479fb0e54e06a5de3db1b1b8365ec78929fbcf1688fac503b5b670 Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.936704 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c948-account-create-update-hjlvn"] Feb 19 21:22:18 crc kubenswrapper[4886]: I0219 21:22:18.950302 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cd2f-account-create-update-hcj7k"] Feb 19 21:22:18 crc kubenswrapper[4886]: W0219 21:22:18.951560 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23217e5b_f7df_460d_abd9_ada19eeb839a.slice/crio-8b1aac912367b8e2fa26201d5a38004a8f137a3ec633a183fbd0a1c4d6c29c82 WatchSource:0}: Error finding container 8b1aac912367b8e2fa26201d5a38004a8f137a3ec633a183fbd0a1c4d6c29c82: Status 404 returned error can't find the container with id 8b1aac912367b8e2fa26201d5a38004a8f137a3ec633a183fbd0a1c4d6c29c82 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.462840 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.585504 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-combined-ca-bundle\") pod \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.585690 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data\") pod \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.585800 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data-custom\") pod \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.585968 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xhg\" (UniqueName: \"kubernetes.io/projected/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-kube-api-access-d2xhg\") pod \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.599166 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf0e9257-4c83-4e36-803a-5b85d9cb5e11" (UID: "bf0e9257-4c83-4e36-803a-5b85d9cb5e11"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.612554 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-kube-api-access-d2xhg" (OuterVolumeSpecName: "kube-api-access-d2xhg") pod "bf0e9257-4c83-4e36-803a-5b85d9cb5e11" (UID: "bf0e9257-4c83-4e36-803a-5b85d9cb5e11"). InnerVolumeSpecName "kube-api-access-d2xhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.661570 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0e9257-4c83-4e36-803a-5b85d9cb5e11" (UID: "bf0e9257-4c83-4e36-803a-5b85d9cb5e11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.692477 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data" (OuterVolumeSpecName: "config-data") pod "bf0e9257-4c83-4e36-803a-5b85d9cb5e11" (UID: "bf0e9257-4c83-4e36-803a-5b85d9cb5e11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.692602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data\") pod \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\" (UID: \"bf0e9257-4c83-4e36-803a-5b85d9cb5e11\") " Feb 19 21:22:19 crc kubenswrapper[4886]: W0219 21:22:19.692754 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bf0e9257-4c83-4e36-803a-5b85d9cb5e11/volumes/kubernetes.io~secret/config-data Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.692774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data" (OuterVolumeSpecName: "config-data") pod "bf0e9257-4c83-4e36-803a-5b85d9cb5e11" (UID: "bf0e9257-4c83-4e36-803a-5b85d9cb5e11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.693830 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.693858 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.693874 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2xhg\" (UniqueName: \"kubernetes.io/projected/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-kube-api-access-d2xhg\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.693886 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0e9257-4c83-4e36-803a-5b85d9cb5e11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.887473 4886 generic.go:334] "Generic (PLEG): container finished" podID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.887627 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6468c4987f-44r29" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.887748 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6468c4987f-44r29" event={"ID":"bf0e9257-4c83-4e36-803a-5b85d9cb5e11","Type":"ContainerDied","Data":"bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.887802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6468c4987f-44r29" event={"ID":"bf0e9257-4c83-4e36-803a-5b85d9cb5e11","Type":"ContainerDied","Data":"c6f4d4f29904490a59b41df1c07a0c164489a14f434a9ad48a3a386fc7a34908"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.887819 4886 scope.go:117] "RemoveContainer" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.895993 4886 generic.go:334] "Generic (PLEG): container finished" podID="ecb05e42-f78a-402a-9ea7-50fd859e9b29" containerID="1cbcd4b8bd1b338c178e47c76cbf8f4f69c2fe4cfca926a4e8a5e7b3d2882c7d" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.896050 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m9xgr" event={"ID":"ecb05e42-f78a-402a-9ea7-50fd859e9b29","Type":"ContainerDied","Data":"1cbcd4b8bd1b338c178e47c76cbf8f4f69c2fe4cfca926a4e8a5e7b3d2882c7d"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.905783 4886 generic.go:334] "Generic (PLEG): container finished" podID="23217e5b-f7df-460d-abd9-ada19eeb839a" containerID="18a62e68cec3f9cb438b5d66671e334c17d127a4d18f2920820b41c6370d8cdb" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.905842 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" 
event={"ID":"23217e5b-f7df-460d-abd9-ada19eeb839a","Type":"ContainerDied","Data":"18a62e68cec3f9cb438b5d66671e334c17d127a4d18f2920820b41c6370d8cdb"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.905868 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" event={"ID":"23217e5b-f7df-460d-abd9-ada19eeb839a","Type":"ContainerStarted","Data":"8b1aac912367b8e2fa26201d5a38004a8f137a3ec633a183fbd0a1c4d6c29c82"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.911505 4886 generic.go:334] "Generic (PLEG): container finished" podID="cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" containerID="f2e2c96fe32633879434267721c95e6e9e266c966ac8aaf6af01d57ba935d6f4" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.911710 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-001b-account-create-update-56t7m" event={"ID":"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc","Type":"ContainerDied","Data":"f2e2c96fe32633879434267721c95e6e9e266c966ac8aaf6af01d57ba935d6f4"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.916724 4886 generic.go:334] "Generic (PLEG): container finished" podID="e6f6e153-c56c-4b3f-8c0c-457f163ac959" containerID="08cfb6081d81afa39dd546464d2c4bbb5cdb2860da1857f50ad0174c862880de" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.916796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" event={"ID":"e6f6e153-c56c-4b3f-8c0c-457f163ac959","Type":"ContainerDied","Data":"08cfb6081d81afa39dd546464d2c4bbb5cdb2860da1857f50ad0174c862880de"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.916823 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" event={"ID":"e6f6e153-c56c-4b3f-8c0c-457f163ac959","Type":"ContainerStarted","Data":"fa1e1eeb35566bcc121aadad38a24d2a4810963e8b2807b171bfc0f540ac15af"} Feb 19 21:22:19 crc 
kubenswrapper[4886]: I0219 21:22:19.923834 4886 generic.go:334] "Generic (PLEG): container finished" podID="11a70275-beb0-4c27-8e45-b8a8143c34a2" containerID="99ce8f3f4f40db4ba0fbe5e35bd58722d8c97e35dab2e5e129897bceea565cc1" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.923993 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v4wd4" event={"ID":"11a70275-beb0-4c27-8e45-b8a8143c34a2","Type":"ContainerDied","Data":"99ce8f3f4f40db4ba0fbe5e35bd58722d8c97e35dab2e5e129897bceea565cc1"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.924052 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v4wd4" event={"ID":"11a70275-beb0-4c27-8e45-b8a8143c34a2","Type":"ContainerStarted","Data":"503855744e479fb0e54e06a5de3db1b1b8365ec78929fbcf1688fac503b5b670"} Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.935655 4886 generic.go:334] "Generic (PLEG): container finished" podID="46ceffae-e399-457a-80fe-152a1a143641" containerID="28427be5b5061da3c3044a6d62cd6d591a82f89ddabc6cddbb075469c3b930b9" exitCode=0 Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.935795 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.935804 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:22:19 crc kubenswrapper[4886]: I0219 21:22:19.936724 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x9bqh" event={"ID":"46ceffae-e399-457a-80fe-152a1a143641","Type":"ContainerDied","Data":"28427be5b5061da3c3044a6d62cd6d591a82f89ddabc6cddbb075469c3b930b9"} Feb 19 21:22:20 crc kubenswrapper[4886]: I0219 21:22:20.130195 4886 scope.go:117] "RemoveContainer" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" Feb 19 21:22:20 crc kubenswrapper[4886]: E0219 21:22:20.130679 4886 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235\": container with ID starting with bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235 not found: ID does not exist" containerID="bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235" Feb 19 21:22:20 crc kubenswrapper[4886]: I0219 21:22:20.130715 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235"} err="failed to get container status \"bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235\": rpc error: code = NotFound desc = could not find container \"bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235\": container with ID starting with bbc3e8ba189c0315a4442b7a1e8d2fad49846dad99116c45816d751f67e9e235 not found: ID does not exist" Feb 19 21:22:20 crc kubenswrapper[4886]: I0219 21:22:20.134544 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6468c4987f-44r29"] Feb 19 21:22:20 crc kubenswrapper[4886]: I0219 21:22:20.145189 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6468c4987f-44r29"] Feb 19 21:22:20 crc kubenswrapper[4886]: I0219 21:22:20.625377 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" path="/var/lib/kubelet/pods/bf0e9257-4c83-4e36-803a-5b85d9cb5e11/volumes" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.631559 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.659596 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c79h4\" (UniqueName: \"kubernetes.io/projected/e6f6e153-c56c-4b3f-8c0c-457f163ac959-kube-api-access-c79h4\") pod \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.659690 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6f6e153-c56c-4b3f-8c0c-457f163ac959-operator-scripts\") pod \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\" (UID: \"e6f6e153-c56c-4b3f-8c0c-457f163ac959\") " Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.661004 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6f6e153-c56c-4b3f-8c0c-457f163ac959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6f6e153-c56c-4b3f-8c0c-457f163ac959" (UID: "e6f6e153-c56c-4b3f-8c0c-457f163ac959"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.681056 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f6e153-c56c-4b3f-8c0c-457f163ac959-kube-api-access-c79h4" (OuterVolumeSpecName: "kube-api-access-c79h4") pod "e6f6e153-c56c-4b3f-8c0c-457f163ac959" (UID: "e6f6e153-c56c-4b3f-8c0c-457f163ac959"). InnerVolumeSpecName "kube-api-access-c79h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.762283 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6f6e153-c56c-4b3f-8c0c-457f163ac959-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.762327 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c79h4\" (UniqueName: \"kubernetes.io/projected/e6f6e153-c56c-4b3f-8c0c-457f163ac959-kube-api-access-c79h4\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.910202 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.918964 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.927064 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.972757 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 21:22:21 crc kubenswrapper[4886]: I0219 21:22:21.973632 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.012056 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.039244 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.041764 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.042370 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" event={"ID":"e6f6e153-c56c-4b3f-8c0c-457f163ac959","Type":"ContainerDied","Data":"fa1e1eeb35566bcc121aadad38a24d2a4810963e8b2807b171bfc0f540ac15af"} Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.042408 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa1e1eeb35566bcc121aadad38a24d2a4810963e8b2807b171bfc0f540ac15af" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.042487 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cd2f-account-create-update-hcj7k" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.045037 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.052336 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v4wd4" event={"ID":"11a70275-beb0-4c27-8e45-b8a8143c34a2","Type":"ContainerDied","Data":"503855744e479fb0e54e06a5de3db1b1b8365ec78929fbcf1688fac503b5b670"} Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.052551 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503855744e479fb0e54e06a5de3db1b1b8365ec78929fbcf1688fac503b5b670" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.062979 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x9bqh" event={"ID":"46ceffae-e399-457a-80fe-152a1a143641","Type":"ContainerDied","Data":"1240130e422edae29b40f993c5cb217bec32a5d9a2d3a57c5ef65db76a6b1304"} Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.063020 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1240130e422edae29b40f993c5cb217bec32a5d9a2d3a57c5ef65db76a6b1304" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.064239 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x9bqh" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.077148 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-m9xgr" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.079033 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m9xgr" event={"ID":"ecb05e42-f78a-402a-9ea7-50fd859e9b29","Type":"ContainerDied","Data":"2d634c8c798cfd9d587619479c223c93f9861d432ba317f30f3879857fc62b62"} Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.079217 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d634c8c798cfd9d587619479c223c93f9861d432ba317f30f3879857fc62b62" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.082420 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dphh\" (UniqueName: \"kubernetes.io/projected/ecb05e42-f78a-402a-9ea7-50fd859e9b29-kube-api-access-5dphh\") pod \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\" (UID: \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.082514 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ceffae-e399-457a-80fe-152a1a143641-operator-scripts\") pod \"46ceffae-e399-457a-80fe-152a1a143641\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.082619 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zncj4\" (UniqueName: \"kubernetes.io/projected/23217e5b-f7df-460d-abd9-ada19eeb839a-kube-api-access-zncj4\") pod \"23217e5b-f7df-460d-abd9-ada19eeb839a\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.082690 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb05e42-f78a-402a-9ea7-50fd859e9b29-operator-scripts\") pod \"ecb05e42-f78a-402a-9ea7-50fd859e9b29\" (UID: 
\"ecb05e42-f78a-402a-9ea7-50fd859e9b29\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.082865 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23217e5b-f7df-460d-abd9-ada19eeb839a-operator-scripts\") pod \"23217e5b-f7df-460d-abd9-ada19eeb839a\" (UID: \"23217e5b-f7df-460d-abd9-ada19eeb839a\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.082913 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmkhq\" (UniqueName: \"kubernetes.io/projected/46ceffae-e399-457a-80fe-152a1a143641-kube-api-access-hmkhq\") pod \"46ceffae-e399-457a-80fe-152a1a143641\" (UID: \"46ceffae-e399-457a-80fe-152a1a143641\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.084497 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ceffae-e399-457a-80fe-152a1a143641-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46ceffae-e399-457a-80fe-152a1a143641" (UID: "46ceffae-e399-457a-80fe-152a1a143641"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.084795 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb05e42-f78a-402a-9ea7-50fd859e9b29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecb05e42-f78a-402a-9ea7-50fd859e9b29" (UID: "ecb05e42-f78a-402a-9ea7-50fd859e9b29"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.085173 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23217e5b-f7df-460d-abd9-ada19eeb839a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23217e5b-f7df-460d-abd9-ada19eeb839a" (UID: "23217e5b-f7df-460d-abd9-ada19eeb839a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.110461 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23217e5b-f7df-460d-abd9-ada19eeb839a-kube-api-access-zncj4" (OuterVolumeSpecName: "kube-api-access-zncj4") pod "23217e5b-f7df-460d-abd9-ada19eeb839a" (UID: "23217e5b-f7df-460d-abd9-ada19eeb839a"). InnerVolumeSpecName "kube-api-access-zncj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.110681 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ceffae-e399-457a-80fe-152a1a143641-kube-api-access-hmkhq" (OuterVolumeSpecName: "kube-api-access-hmkhq") pod "46ceffae-e399-457a-80fe-152a1a143641" (UID: "46ceffae-e399-457a-80fe-152a1a143641"). InnerVolumeSpecName "kube-api-access-hmkhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.148708 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" event={"ID":"23217e5b-f7df-460d-abd9-ada19eeb839a","Type":"ContainerDied","Data":"8b1aac912367b8e2fa26201d5a38004a8f137a3ec633a183fbd0a1c4d6c29c82"} Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.148761 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b1aac912367b8e2fa26201d5a38004a8f137a3ec633a183fbd0a1c4d6c29c82" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.148874 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c948-account-create-update-hjlvn" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.163499 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb05e42-f78a-402a-9ea7-50fd859e9b29-kube-api-access-5dphh" (OuterVolumeSpecName: "kube-api-access-5dphh") pod "ecb05e42-f78a-402a-9ea7-50fd859e9b29" (UID: "ecb05e42-f78a-402a-9ea7-50fd859e9b29"). InnerVolumeSpecName "kube-api-access-5dphh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.165792 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-001b-account-create-update-56t7m" event={"ID":"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc","Type":"ContainerDied","Data":"e8ec3863de8be75f4d9e1537d2ec201db595127500d883a294453e458eec28d7"} Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.165842 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ec3863de8be75f4d9e1537d2ec201db595127500d883a294453e458eec28d7" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.165927 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-001b-account-create-update-56t7m" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.192297 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a70275-beb0-4c27-8e45-b8a8143c34a2-operator-scripts\") pod \"11a70275-beb0-4c27-8e45-b8a8143c34a2\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.192526 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dpj\" (UniqueName: \"kubernetes.io/projected/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-kube-api-access-96dpj\") pod \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.192608 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-operator-scripts\") pod \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\" (UID: \"cf75e0d9-f5a5-47ab-af09-0d0321b1cacc\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.192643 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mlf6\" (UniqueName: \"kubernetes.io/projected/11a70275-beb0-4c27-8e45-b8a8143c34a2-kube-api-access-2mlf6\") pod \"11a70275-beb0-4c27-8e45-b8a8143c34a2\" (UID: \"11a70275-beb0-4c27-8e45-b8a8143c34a2\") " Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.193717 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zncj4\" (UniqueName: \"kubernetes.io/projected/23217e5b-f7df-460d-abd9-ada19eeb839a-kube-api-access-zncj4\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.193737 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ecb05e42-f78a-402a-9ea7-50fd859e9b29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.193746 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23217e5b-f7df-460d-abd9-ada19eeb839a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.193761 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmkhq\" (UniqueName: \"kubernetes.io/projected/46ceffae-e399-457a-80fe-152a1a143641-kube-api-access-hmkhq\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.193771 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dphh\" (UniqueName: \"kubernetes.io/projected/ecb05e42-f78a-402a-9ea7-50fd859e9b29-kube-api-access-5dphh\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.193780 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ceffae-e399-457a-80fe-152a1a143641-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.196919 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" (UID: "cf75e0d9-f5a5-47ab-af09-0d0321b1cacc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.196954 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a70275-beb0-4c27-8e45-b8a8143c34a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11a70275-beb0-4c27-8e45-b8a8143c34a2" (UID: "11a70275-beb0-4c27-8e45-b8a8143c34a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.197135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-kube-api-access-96dpj" (OuterVolumeSpecName: "kube-api-access-96dpj") pod "cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" (UID: "cf75e0d9-f5a5-47ab-af09-0d0321b1cacc"). InnerVolumeSpecName "kube-api-access-96dpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.209169 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a70275-beb0-4c27-8e45-b8a8143c34a2-kube-api-access-2mlf6" (OuterVolumeSpecName: "kube-api-access-2mlf6") pod "11a70275-beb0-4c27-8e45-b8a8143c34a2" (UID: "11a70275-beb0-4c27-8e45-b8a8143c34a2"). InnerVolumeSpecName "kube-api-access-2mlf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.266777 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.266869 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.270656 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.295966 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a70275-beb0-4c27-8e45-b8a8143c34a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.296004 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96dpj\" (UniqueName: \"kubernetes.io/projected/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-kube-api-access-96dpj\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.296015 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:22 crc kubenswrapper[4886]: I0219 21:22:22.296025 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mlf6\" (UniqueName: \"kubernetes.io/projected/11a70275-beb0-4c27-8e45-b8a8143c34a2-kube-api-access-2mlf6\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:23 crc kubenswrapper[4886]: I0219 21:22:23.181476 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v4wd4" Feb 19 21:22:23 crc kubenswrapper[4886]: I0219 21:22:23.181788 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 21:22:23 crc kubenswrapper[4886]: I0219 21:22:23.181810 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.008557 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035425 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-config-data\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035515 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-log-httpd\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035558 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zv8\" (UniqueName: \"kubernetes.io/projected/3b480733-19ba-4138-a626-6f68d33a3c61-kube-api-access-l9zv8\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035611 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-combined-ca-bundle\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: 
\"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035750 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-run-httpd\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035830 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-scripts\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.035866 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-sg-core-conf-yaml\") pod \"3b480733-19ba-4138-a626-6f68d33a3c61\" (UID: \"3b480733-19ba-4138-a626-6f68d33a3c61\") " Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.037767 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.042739 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b480733-19ba-4138-a626-6f68d33a3c61-kube-api-access-l9zv8" (OuterVolumeSpecName: "kube-api-access-l9zv8") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "kube-api-access-l9zv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.043308 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.044100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-scripts" (OuterVolumeSpecName: "scripts") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.138657 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.138695 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.138707 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zv8\" (UniqueName: \"kubernetes.io/projected/3b480733-19ba-4138-a626-6f68d33a3c61-kube-api-access-l9zv8\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.138721 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b480733-19ba-4138-a626-6f68d33a3c61-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 
21:22:24.156355 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.179723 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.196585 4886 generic.go:334] "Generic (PLEG): container finished" podID="3b480733-19ba-4138-a626-6f68d33a3c61" containerID="b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f" exitCode=0 Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.198232 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.199041 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerDied","Data":"b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f"} Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.199084 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b480733-19ba-4138-a626-6f68d33a3c61","Type":"ContainerDied","Data":"2fb66cce5ce23d14e4699d163b8e7754080f04e624c334ede24fae6be16e7cd9"} Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.199111 4886 scope.go:117] "RemoveContainer" containerID="59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.229654 4886 scope.go:117] "RemoveContainer" containerID="c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.241608 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.241660 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.250530 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-config-data" (OuterVolumeSpecName: "config-data") pod "3b480733-19ba-4138-a626-6f68d33a3c61" (UID: "3b480733-19ba-4138-a626-6f68d33a3c61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.259343 4886 scope.go:117] "RemoveContainer" containerID="ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.286789 4886 scope.go:117] "RemoveContainer" containerID="b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.318011 4886 scope.go:117] "RemoveContainer" containerID="59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.320369 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785\": container with ID starting with 59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785 not found: ID does not exist" containerID="59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.320405 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785"} err="failed to get container status \"59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785\": rpc error: code = NotFound desc = could not find container \"59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785\": container with ID starting with 59d3665e57fe2f72500b9fd5368eacb7f86b05e83b1f81aae8da4ade5deef785 not found: ID does not exist" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.320444 4886 scope.go:117] "RemoveContainer" containerID="c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.320742 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e\": container with ID starting with c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e not found: ID does not exist" containerID="c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.320766 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e"} err="failed to get container status \"c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e\": rpc error: code = NotFound desc = could not find container \"c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e\": container with ID starting with c13f0c3b251233e24a0acb5d125b2c7954befbba53cd7aa10a33fb0cbdc1683e not found: ID does not exist" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.320779 4886 scope.go:117] "RemoveContainer" containerID="ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.321599 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0\": container with ID starting with ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0 not found: ID does not exist" containerID="ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.321616 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0"} err="failed to get container status \"ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0\": rpc error: code = NotFound desc = could not find container \"ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0\": 
container with ID starting with ad1d26e3b0031aaa48002f3a14642503484fef4bc145c5f22f8f970750d576b0 not found: ID does not exist" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.321627 4886 scope.go:117] "RemoveContainer" containerID="b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.321827 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f\": container with ID starting with b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f not found: ID does not exist" containerID="b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.321840 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f"} err="failed to get container status \"b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f\": rpc error: code = NotFound desc = could not find container \"b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f\": container with ID starting with b3621c57a34b6c68a3156c124b26b634ed788971513a23e5106ecfbb5038930f not found: ID does not exist" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.344342 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b480733-19ba-4138-a626-6f68d33a3c61-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.534950 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.547950 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.563879 4886 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.565596 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="proxy-httpd" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.565719 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="proxy-httpd" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.565789 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23217e5b-f7df-460d-abd9-ada19eeb839a" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.565872 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="23217e5b-f7df-460d-abd9-ada19eeb839a" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.565959 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-central-agent" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.566048 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-central-agent" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.566135 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.566219 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.566337 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ceffae-e399-457a-80fe-152a1a143641" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 
21:22:24.566420 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ceffae-e399-457a-80fe-152a1a143641" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.566502 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" containerName="heat-engine" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.566727 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" containerName="heat-engine" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.566902 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f6e153-c56c-4b3f-8c0c-457f163ac959" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.567091 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f6e153-c56c-4b3f-8c0c-457f163ac959" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.567184 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a70275-beb0-4c27-8e45-b8a8143c34a2" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.567285 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a70275-beb0-4c27-8e45-b8a8143c34a2" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.567378 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="sg-core" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.567458 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="sg-core" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.567541 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb05e42-f78a-402a-9ea7-50fd859e9b29" containerName="mariadb-database-create" Feb 19 21:22:24 crc 
kubenswrapper[4886]: I0219 21:22:24.567619 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb05e42-f78a-402a-9ea7-50fd859e9b29" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: E0219 21:22:24.567704 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-notification-agent" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.567783 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-notification-agent" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.568446 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="sg-core" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.568548 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ceffae-e399-457a-80fe-152a1a143641" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.568630 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0e9257-4c83-4e36-803a-5b85d9cb5e11" containerName="heat-engine" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.568714 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-central-agent" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.568843 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.568937 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="23217e5b-f7df-460d-abd9-ada19eeb839a" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.569098 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ecb05e42-f78a-402a-9ea7-50fd859e9b29" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.569210 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a70275-beb0-4c27-8e45-b8a8143c34a2" containerName="mariadb-database-create" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.569316 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="proxy-httpd" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.569381 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" containerName="ceilometer-notification-agent" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.569440 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f6e153-c56c-4b3f-8c0c-457f163ac959" containerName="mariadb-account-create-update" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.571489 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.580594 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.580812 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.592133 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.650104 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b480733-19ba-4138-a626-6f68d33a3c61" path="/var/lib/kubelet/pods/3b480733-19ba-4138-a626-6f68d33a3c61/volumes" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.653768 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.653811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-run-httpd\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.653878 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.653909 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-config-data\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.654032 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs72d\" (UniqueName: \"kubernetes.io/projected/323e6ff6-e671-4d94-a6e2-8769137b06c4-kube-api-access-fs72d\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.654148 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-scripts\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.654221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-log-httpd\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.756414 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-scripts\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.756489 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-log-httpd\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.756566 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.756589 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-run-httpd\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.756623 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.756647 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-config-data\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.757244 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-log-httpd\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.757296 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs72d\" (UniqueName: \"kubernetes.io/projected/323e6ff6-e671-4d94-a6e2-8769137b06c4-kube-api-access-fs72d\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.757875 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-run-httpd\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.761149 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.761822 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.762976 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-scripts\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.775084 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-config-data\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " 
pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.777517 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs72d\" (UniqueName: \"kubernetes.io/projected/323e6ff6-e671-4d94-a6e2-8769137b06c4-kube-api-access-fs72d\") pod \"ceilometer-0\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " pod="openstack/ceilometer-0" Feb 19 21:22:24 crc kubenswrapper[4886]: I0219 21:22:24.952413 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:25 crc kubenswrapper[4886]: I0219 21:22:25.418004 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 21:22:25 crc kubenswrapper[4886]: I0219 21:22:25.418446 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 21:22:25 crc kubenswrapper[4886]: W0219 21:22:25.467103 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323e6ff6_e671_4d94_a6e2_8769137b06c4.slice/crio-40a41b6a0d67a3d113b63dc1ed37c3ec407872f0f1dc2330250328af4099017c WatchSource:0}: Error finding container 40a41b6a0d67a3d113b63dc1ed37c3ec407872f0f1dc2330250328af4099017c: Status 404 returned error can't find the container with id 40a41b6a0d67a3d113b63dc1ed37c3ec407872f0f1dc2330250328af4099017c Feb 19 21:22:25 crc kubenswrapper[4886]: I0219 21:22:25.477582 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:25 crc kubenswrapper[4886]: I0219 21:22:25.635668 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 21:22:26 crc kubenswrapper[4886]: I0219 21:22:26.246223 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerStarted","Data":"40a41b6a0d67a3d113b63dc1ed37c3ec407872f0f1dc2330250328af4099017c"} Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.255761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerStarted","Data":"797d433f03d66f9bae1b388b862415480ce29a01b42eb5ea80a46308fe003120"} Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.256291 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerStarted","Data":"66cbd4eceb4f5c696bd9e252b788bfd62112b86e548b1a1f75a743499b905327"} Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.912741 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bp48t"] Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.914587 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.917075 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.917938 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.924063 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bp48t"] Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.930580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rtmrp" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.947033 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.947183 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtnh\" (UniqueName: \"kubernetes.io/projected/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-kube-api-access-vgtnh\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.947243 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-config-data\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " 
pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:27 crc kubenswrapper[4886]: I0219 21:22:27.947305 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-scripts\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.049730 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtnh\" (UniqueName: \"kubernetes.io/projected/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-kube-api-access-vgtnh\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.049974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-config-data\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.050062 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-scripts\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.050123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " 
pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.057909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.059238 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-scripts\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.063872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-config-data\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.070727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtnh\" (UniqueName: \"kubernetes.io/projected/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-kube-api-access-vgtnh\") pod \"nova-cell0-conductor-db-sync-bp48t\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.238249 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.268203 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerStarted","Data":"e3eb0020802a859c2632d3e050dc2924af992e3f0353e05c7b8ebe746bd4ef81"} Feb 19 21:22:28 crc kubenswrapper[4886]: W0219 21:22:28.822643 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e60f29c_ef3b_4733_a6d8_92cd74e10eac.slice/crio-2d1740955ce0428711122416e087a512564c2bf18097784bfa1bcc1fc732ea87 WatchSource:0}: Error finding container 2d1740955ce0428711122416e087a512564c2bf18097784bfa1bcc1fc732ea87: Status 404 returned error can't find the container with id 2d1740955ce0428711122416e087a512564c2bf18097784bfa1bcc1fc732ea87 Feb 19 21:22:28 crc kubenswrapper[4886]: I0219 21:22:28.841503 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bp48t"] Feb 19 21:22:29 crc kubenswrapper[4886]: I0219 21:22:29.286454 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bp48t" event={"ID":"5e60f29c-ef3b-4733-a6d8-92cd74e10eac","Type":"ContainerStarted","Data":"2d1740955ce0428711122416e087a512564c2bf18097784bfa1bcc1fc732ea87"} Feb 19 21:22:30 crc kubenswrapper[4886]: I0219 21:22:30.321789 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerStarted","Data":"95eb2858c50e2d45d9328580e592e0e510883d13bb882d07d085fb0ec89ad32a"} Feb 19 21:22:30 crc kubenswrapper[4886]: I0219 21:22:30.322464 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:22:30 crc kubenswrapper[4886]: I0219 21:22:30.347067 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1926995 podStartE2EDuration="6.347051261s" podCreationTimestamp="2026-02-19 21:22:24 +0000 UTC" firstStartedPulling="2026-02-19 21:22:25.470097521 +0000 UTC m=+1376.097940571" lastFinishedPulling="2026-02-19 21:22:29.624449282 +0000 UTC m=+1380.252292332" observedRunningTime="2026-02-19 21:22:30.338181022 +0000 UTC m=+1380.966024152" watchObservedRunningTime="2026-02-19 21:22:30.347051261 +0000 UTC m=+1380.974894311" Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.112973 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.114996 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="sg-core" containerID="cri-o://e3eb0020802a859c2632d3e050dc2924af992e3f0353e05c7b8ebe746bd4ef81" gracePeriod=30 Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.115132 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="proxy-httpd" containerID="cri-o://95eb2858c50e2d45d9328580e592e0e510883d13bb882d07d085fb0ec89ad32a" gracePeriod=30 Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.115147 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-central-agent" containerID="cri-o://797d433f03d66f9bae1b388b862415480ce29a01b42eb5ea80a46308fe003120" gracePeriod=30 Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.115124 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-notification-agent" 
containerID="cri-o://66cbd4eceb4f5c696bd9e252b788bfd62112b86e548b1a1f75a743499b905327" gracePeriod=30 Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.371984 4886 generic.go:334] "Generic (PLEG): container finished" podID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerID="95eb2858c50e2d45d9328580e592e0e510883d13bb882d07d085fb0ec89ad32a" exitCode=0 Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.372014 4886 generic.go:334] "Generic (PLEG): container finished" podID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerID="e3eb0020802a859c2632d3e050dc2924af992e3f0353e05c7b8ebe746bd4ef81" exitCode=2 Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.372033 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerDied","Data":"95eb2858c50e2d45d9328580e592e0e510883d13bb882d07d085fb0ec89ad32a"} Feb 19 21:22:33 crc kubenswrapper[4886]: I0219 21:22:33.372060 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerDied","Data":"e3eb0020802a859c2632d3e050dc2924af992e3f0353e05c7b8ebe746bd4ef81"} Feb 19 21:22:34 crc kubenswrapper[4886]: I0219 21:22:34.387350 4886 generic.go:334] "Generic (PLEG): container finished" podID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerID="66cbd4eceb4f5c696bd9e252b788bfd62112b86e548b1a1f75a743499b905327" exitCode=0 Feb 19 21:22:34 crc kubenswrapper[4886]: I0219 21:22:34.387677 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerDied","Data":"66cbd4eceb4f5c696bd9e252b788bfd62112b86e548b1a1f75a743499b905327"} Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.479224 4886 generic.go:334] "Generic (PLEG): container finished" podID="323e6ff6-e671-4d94-a6e2-8769137b06c4" 
containerID="797d433f03d66f9bae1b388b862415480ce29a01b42eb5ea80a46308fe003120" exitCode=0 Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.479296 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerDied","Data":"797d433f03d66f9bae1b388b862415480ce29a01b42eb5ea80a46308fe003120"} Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.876821 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.988495 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-sg-core-conf-yaml\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.988557 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-scripts\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.988710 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-run-httpd\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.988776 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-combined-ca-bundle\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: 
I0219 21:22:36.988840 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs72d\" (UniqueName: \"kubernetes.io/projected/323e6ff6-e671-4d94-a6e2-8769137b06c4-kube-api-access-fs72d\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.988874 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-log-httpd\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.988962 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-config-data\") pod \"323e6ff6-e671-4d94-a6e2-8769137b06c4\" (UID: \"323e6ff6-e671-4d94-a6e2-8769137b06c4\") " Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.989554 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.989773 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.992515 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.992547 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/323e6ff6-e671-4d94-a6e2-8769137b06c4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.994700 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-scripts" (OuterVolumeSpecName: "scripts") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:36 crc kubenswrapper[4886]: I0219 21:22:36.995790 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323e6ff6-e671-4d94-a6e2-8769137b06c4-kube-api-access-fs72d" (OuterVolumeSpecName: "kube-api-access-fs72d") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "kube-api-access-fs72d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.031575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.084857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.094794 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs72d\" (UniqueName: \"kubernetes.io/projected/323e6ff6-e671-4d94-a6e2-8769137b06c4-kube-api-access-fs72d\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.094827 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.094841 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.094853 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.123428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-config-data" (OuterVolumeSpecName: "config-data") pod "323e6ff6-e671-4d94-a6e2-8769137b06c4" (UID: "323e6ff6-e671-4d94-a6e2-8769137b06c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.197011 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e6ff6-e671-4d94-a6e2-8769137b06c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.497250 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bp48t" event={"ID":"5e60f29c-ef3b-4733-a6d8-92cd74e10eac","Type":"ContainerStarted","Data":"bbc1f0f3c2251812647c9ba8c2c1c9d006361376576f8dbf661f426aa2f679e9"} Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.502417 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"323e6ff6-e671-4d94-a6e2-8769137b06c4","Type":"ContainerDied","Data":"40a41b6a0d67a3d113b63dc1ed37c3ec407872f0f1dc2330250328af4099017c"} Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.502470 4886 scope.go:117] "RemoveContainer" containerID="95eb2858c50e2d45d9328580e592e0e510883d13bb882d07d085fb0ec89ad32a" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.502626 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.527122 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bp48t" podStartSLOduration=2.197788462 podStartE2EDuration="10.527104963s" podCreationTimestamp="2026-02-19 21:22:27 +0000 UTC" firstStartedPulling="2026-02-19 21:22:28.825745098 +0000 UTC m=+1379.453588148" lastFinishedPulling="2026-02-19 21:22:37.155061599 +0000 UTC m=+1387.782904649" observedRunningTime="2026-02-19 21:22:37.515936118 +0000 UTC m=+1388.143779178" watchObservedRunningTime="2026-02-19 21:22:37.527104963 +0000 UTC m=+1388.154948003" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.529582 4886 scope.go:117] "RemoveContainer" containerID="e3eb0020802a859c2632d3e050dc2924af992e3f0353e05c7b8ebe746bd4ef81" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.550992 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.562940 4886 scope.go:117] "RemoveContainer" containerID="66cbd4eceb4f5c696bd9e252b788bfd62112b86e548b1a1f75a743499b905327" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.576384 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.587464 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:37 crc kubenswrapper[4886]: E0219 21:22:37.587920 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-central-agent" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.587937 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-central-agent" Feb 19 21:22:37 crc kubenswrapper[4886]: E0219 21:22:37.587962 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="sg-core" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.587969 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="sg-core" Feb 19 21:22:37 crc kubenswrapper[4886]: E0219 21:22:37.587980 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="proxy-httpd" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.587985 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="proxy-httpd" Feb 19 21:22:37 crc kubenswrapper[4886]: E0219 21:22:37.588003 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-notification-agent" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.588008 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-notification-agent" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.588198 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="proxy-httpd" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.588213 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="sg-core" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.588234 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-notification-agent" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.588251 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" containerName="ceilometer-central-agent" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.590310 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.598146 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.621180 4886 scope.go:117] "RemoveContainer" containerID="797d433f03d66f9bae1b388b862415480ce29a01b42eb5ea80a46308fe003120" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.622589 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.622931 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:22:37 crc kubenswrapper[4886]: E0219 21:22:37.692495 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323e6ff6_e671_4d94_a6e2_8769137b06c4.slice\": RecentStats: unable to find data in memory cache]" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731141 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-run-httpd\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkpwc\" (UniqueName: \"kubernetes.io/projected/c38122a6-da13-472b-8989-14d4a8c0d787-kube-api-access-lkpwc\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731286 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-scripts\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-log-httpd\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731522 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-config-data\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.731607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833690 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-config-data\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833806 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833893 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-run-httpd\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833917 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkpwc\" (UniqueName: \"kubernetes.io/projected/c38122a6-da13-472b-8989-14d4a8c0d787-kube-api-access-lkpwc\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.833965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-scripts\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.834235 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-log-httpd\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.835029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-run-httpd\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.840707 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-config-data\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.840778 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.840842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-scripts\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.847045 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.857427 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkpwc\" (UniqueName: \"kubernetes.io/projected/c38122a6-da13-472b-8989-14d4a8c0d787-kube-api-access-lkpwc\") pod \"ceilometer-0\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " pod="openstack/ceilometer-0" Feb 19 21:22:37 crc kubenswrapper[4886]: I0219 21:22:37.942859 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:38 crc kubenswrapper[4886]: I0219 21:22:38.453239 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:38 crc kubenswrapper[4886]: I0219 21:22:38.514762 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerStarted","Data":"34361f59819b1e30ae465161c22e6b12e903db6447a4e3f81ded3c0b13caba48"} Feb 19 21:22:38 crc kubenswrapper[4886]: I0219 21:22:38.617435 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323e6ff6-e671-4d94-a6e2-8769137b06c4" path="/var/lib/kubelet/pods/323e6ff6-e671-4d94-a6e2-8769137b06c4/volumes" Feb 19 21:22:39 crc kubenswrapper[4886]: I0219 21:22:39.542970 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerStarted","Data":"7c0199a62cd01659bc2bab217e45abb50d8736caa1d532925e3809d6a52a8ddd"} Feb 19 21:22:41 crc kubenswrapper[4886]: I0219 21:22:41.578039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerStarted","Data":"a9f867fcaf17c5bb8a7ca0c6d641c1c4862ea9d7e24f2b7115be271b02bcd19e"} Feb 19 21:22:42 crc kubenswrapper[4886]: I0219 21:22:42.592213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerStarted","Data":"a1a8eeee27ae89c20c8975f497072d6aec4e64e57f41d860a49b5ec88d2d9057"} Feb 19 21:22:43 crc kubenswrapper[4886]: I0219 21:22:43.096974 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.615571 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-central-agent" containerID="cri-o://7c0199a62cd01659bc2bab217e45abb50d8736caa1d532925e3809d6a52a8ddd" gracePeriod=30 Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.616656 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="proxy-httpd" containerID="cri-o://8a3ed795a13828e95eb0ba7f9d867dd93bc61ee346ce5109e0bfd811d164a195" gracePeriod=30 Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.616728 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="sg-core" containerID="cri-o://a1a8eeee27ae89c20c8975f497072d6aec4e64e57f41d860a49b5ec88d2d9057" gracePeriod=30 Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.616780 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-notification-agent" containerID="cri-o://a9f867fcaf17c5bb8a7ca0c6d641c1c4862ea9d7e24f2b7115be271b02bcd19e" gracePeriod=30 
Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.622579 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.622649 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerStarted","Data":"8a3ed795a13828e95eb0ba7f9d867dd93bc61ee346ce5109e0bfd811d164a195"} Feb 19 21:22:44 crc kubenswrapper[4886]: I0219 21:22:44.654194 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.164030839 podStartE2EDuration="7.654170479s" podCreationTimestamp="2026-02-19 21:22:37 +0000 UTC" firstStartedPulling="2026-02-19 21:22:38.451845297 +0000 UTC m=+1389.079688347" lastFinishedPulling="2026-02-19 21:22:43.941984937 +0000 UTC m=+1394.569827987" observedRunningTime="2026-02-19 21:22:44.64043437 +0000 UTC m=+1395.268277450" watchObservedRunningTime="2026-02-19 21:22:44.654170479 +0000 UTC m=+1395.282013529" Feb 19 21:22:45 crc kubenswrapper[4886]: I0219 21:22:45.628505 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38122a6-da13-472b-8989-14d4a8c0d787" containerID="8a3ed795a13828e95eb0ba7f9d867dd93bc61ee346ce5109e0bfd811d164a195" exitCode=0 Feb 19 21:22:45 crc kubenswrapper[4886]: I0219 21:22:45.628844 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38122a6-da13-472b-8989-14d4a8c0d787" containerID="a1a8eeee27ae89c20c8975f497072d6aec4e64e57f41d860a49b5ec88d2d9057" exitCode=2 Feb 19 21:22:45 crc kubenswrapper[4886]: I0219 21:22:45.628868 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38122a6-da13-472b-8989-14d4a8c0d787" containerID="a9f867fcaf17c5bb8a7ca0c6d641c1c4862ea9d7e24f2b7115be271b02bcd19e" exitCode=0 Feb 19 21:22:45 crc kubenswrapper[4886]: I0219 21:22:45.628656 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerDied","Data":"8a3ed795a13828e95eb0ba7f9d867dd93bc61ee346ce5109e0bfd811d164a195"} Feb 19 21:22:45 crc kubenswrapper[4886]: I0219 21:22:45.628922 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerDied","Data":"a1a8eeee27ae89c20c8975f497072d6aec4e64e57f41d860a49b5ec88d2d9057"} Feb 19 21:22:45 crc kubenswrapper[4886]: I0219 21:22:45.628946 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerDied","Data":"a9f867fcaf17c5bb8a7ca0c6d641c1c4862ea9d7e24f2b7115be271b02bcd19e"} Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.678178 4886 generic.go:334] "Generic (PLEG): container finished" podID="c38122a6-da13-472b-8989-14d4a8c0d787" containerID="7c0199a62cd01659bc2bab217e45abb50d8736caa1d532925e3809d6a52a8ddd" exitCode=0 Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.679014 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerDied","Data":"7c0199a62cd01659bc2bab217e45abb50d8736caa1d532925e3809d6a52a8ddd"} Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.679807 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38122a6-da13-472b-8989-14d4a8c0d787","Type":"ContainerDied","Data":"34361f59819b1e30ae465161c22e6b12e903db6447a4e3f81ded3c0b13caba48"} Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.679934 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34361f59819b1e30ae465161c22e6b12e903db6447a4e3f81ded3c0b13caba48" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.742295 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918042 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-scripts\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918394 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-sg-core-conf-yaml\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918460 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkpwc\" (UniqueName: \"kubernetes.io/projected/c38122a6-da13-472b-8989-14d4a8c0d787-kube-api-access-lkpwc\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918519 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-run-httpd\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918607 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-log-httpd\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918749 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-combined-ca-bundle\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.918785 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-config-data\") pod \"c38122a6-da13-472b-8989-14d4a8c0d787\" (UID: \"c38122a6-da13-472b-8989-14d4a8c0d787\") " Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.919414 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.919691 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.919838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.923757 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-scripts" (OuterVolumeSpecName: "scripts") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.924889 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38122a6-da13-472b-8989-14d4a8c0d787-kube-api-access-lkpwc" (OuterVolumeSpecName: "kube-api-access-lkpwc") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "kube-api-access-lkpwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:49 crc kubenswrapper[4886]: I0219 21:22:49.981070 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.022152 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.022186 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkpwc\" (UniqueName: \"kubernetes.io/projected/c38122a6-da13-472b-8989-14d4a8c0d787-kube-api-access-lkpwc\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.022202 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38122a6-da13-472b-8989-14d4a8c0d787-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.022213 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.044724 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.050883 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-config-data" (OuterVolumeSpecName: "config-data") pod "c38122a6-da13-472b-8989-14d4a8c0d787" (UID: "c38122a6-da13-472b-8989-14d4a8c0d787"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.124413 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.124456 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38122a6-da13-472b-8989-14d4a8c0d787-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.689887 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.719720 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.742855 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.763885 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:50 crc kubenswrapper[4886]: E0219 21:22:50.764589 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-central-agent" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.764615 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-central-agent" Feb 19 21:22:50 crc kubenswrapper[4886]: E0219 21:22:50.764641 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="proxy-httpd" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.764651 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="proxy-httpd" Feb 19 21:22:50 crc kubenswrapper[4886]: E0219 21:22:50.764661 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="sg-core" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.764668 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="sg-core" Feb 19 21:22:50 crc kubenswrapper[4886]: E0219 21:22:50.764687 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-notification-agent" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.764695 4886 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-notification-agent" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.765012 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="sg-core" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.765042 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="proxy-httpd" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.765055 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-notification-agent" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.765070 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" containerName="ceilometer-central-agent" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.768053 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.770069 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.770310 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.775956 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-log-httpd\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839556 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-run-httpd\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839759 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-config-data\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " 
pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839820 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839841 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nh8\" (UniqueName: \"kubernetes.io/projected/75c0d65b-3609-4dd4-a676-8a8baa757cfe-kube-api-access-b6nh8\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.839922 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-scripts\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942518 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-config-data\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942609 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6nh8\" (UniqueName: \"kubernetes.io/projected/75c0d65b-3609-4dd4-a676-8a8baa757cfe-kube-api-access-b6nh8\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942659 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-scripts\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942707 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-log-httpd\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.942730 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-run-httpd\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.943196 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-run-httpd\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc 
kubenswrapper[4886]: I0219 21:22:50.943758 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-log-httpd\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.946838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.947241 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-scripts\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.947282 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.947488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-config-data\") pod \"ceilometer-0\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:50 crc kubenswrapper[4886]: I0219 21:22:50.965199 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nh8\" (UniqueName: \"kubernetes.io/projected/75c0d65b-3609-4dd4-a676-8a8baa757cfe-kube-api-access-b6nh8\") pod \"ceilometer-0\" (UID: 
\"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " pod="openstack/ceilometer-0" Feb 19 21:22:51 crc kubenswrapper[4886]: I0219 21:22:51.099619 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:22:51 crc kubenswrapper[4886]: I0219 21:22:51.613598 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:22:51 crc kubenswrapper[4886]: I0219 21:22:51.700401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerStarted","Data":"f4088d9626e0061cbaa53c2be14951262a2d360a1678732eb7d6408eae1cc041"} Feb 19 21:22:52 crc kubenswrapper[4886]: I0219 21:22:52.614866 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38122a6-da13-472b-8989-14d4a8c0d787" path="/var/lib/kubelet/pods/c38122a6-da13-472b-8989-14d4a8c0d787/volumes" Feb 19 21:22:52 crc kubenswrapper[4886]: I0219 21:22:52.725457 4886 generic.go:334] "Generic (PLEG): container finished" podID="5e60f29c-ef3b-4733-a6d8-92cd74e10eac" containerID="bbc1f0f3c2251812647c9ba8c2c1c9d006361376576f8dbf661f426aa2f679e9" exitCode=0 Feb 19 21:22:52 crc kubenswrapper[4886]: I0219 21:22:52.725520 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bp48t" event={"ID":"5e60f29c-ef3b-4733-a6d8-92cd74e10eac","Type":"ContainerDied","Data":"bbc1f0f3c2251812647c9ba8c2c1c9d006361376576f8dbf661f426aa2f679e9"} Feb 19 21:22:52 crc kubenswrapper[4886]: I0219 21:22:52.738578 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerStarted","Data":"f98a37daf2d11a3c035b4f1deb3c09ce86d68609dd852e70f7d58d1daba67de4"} Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.692070 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-8xp7p"] Feb 19 21:22:53 crc 
kubenswrapper[4886]: I0219 21:22:53.699832 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.703953 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-8xp7p"] Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.763416 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerStarted","Data":"9f34127cac6e731efb7bb7b587134de360fe4f46f5d0318789cbe627491f256e"} Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.763513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerStarted","Data":"c0dd2bed96786f932ceec332c7021984c902d0889c7df7e2cb747a942cf38b76"} Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.787549 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-3436-account-create-update-kk7fg"] Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.789149 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.791488 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.805005 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3436-account-create-update-kk7fg"] Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.822154 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be6627da-95f2-48b8-ba42-eae7018d98b5-operator-scripts\") pod \"aodh-db-create-8xp7p\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.822346 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnh5\" (UniqueName: \"kubernetes.io/projected/be6627da-95f2-48b8-ba42-eae7018d98b5-kube-api-access-bwnh5\") pod \"aodh-db-create-8xp7p\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.924317 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnh5\" (UniqueName: \"kubernetes.io/projected/be6627da-95f2-48b8-ba42-eae7018d98b5-kube-api-access-bwnh5\") pod \"aodh-db-create-8xp7p\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.924429 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffed57be-9be1-478b-b43b-c6d67de8630c-operator-scripts\") pod \"aodh-3436-account-create-update-kk7fg\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " 
pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.924468 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be6627da-95f2-48b8-ba42-eae7018d98b5-operator-scripts\") pod \"aodh-db-create-8xp7p\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.924489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxcc\" (UniqueName: \"kubernetes.io/projected/ffed57be-9be1-478b-b43b-c6d67de8630c-kube-api-access-rlxcc\") pod \"aodh-3436-account-create-update-kk7fg\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.925505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be6627da-95f2-48b8-ba42-eae7018d98b5-operator-scripts\") pod \"aodh-db-create-8xp7p\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:53 crc kubenswrapper[4886]: I0219 21:22:53.946215 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnh5\" (UniqueName: \"kubernetes.io/projected/be6627da-95f2-48b8-ba42-eae7018d98b5-kube-api-access-bwnh5\") pod \"aodh-db-create-8xp7p\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.028387 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffed57be-9be1-478b-b43b-c6d67de8630c-operator-scripts\") pod \"aodh-3436-account-create-update-kk7fg\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " 
pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.028717 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.029337 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxcc\" (UniqueName: \"kubernetes.io/projected/ffed57be-9be1-478b-b43b-c6d67de8630c-kube-api-access-rlxcc\") pod \"aodh-3436-account-create-update-kk7fg\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.029825 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffed57be-9be1-478b-b43b-c6d67de8630c-operator-scripts\") pod \"aodh-3436-account-create-update-kk7fg\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.062227 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxcc\" (UniqueName: \"kubernetes.io/projected/ffed57be-9be1-478b-b43b-c6d67de8630c-kube-api-access-rlxcc\") pod \"aodh-3436-account-create-update-kk7fg\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.109833 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.274664 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.342128 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgtnh\" (UniqueName: \"kubernetes.io/projected/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-kube-api-access-vgtnh\") pod \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.379615 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-kube-api-access-vgtnh" (OuterVolumeSpecName: "kube-api-access-vgtnh") pod "5e60f29c-ef3b-4733-a6d8-92cd74e10eac" (UID: "5e60f29c-ef3b-4733-a6d8-92cd74e10eac"). InnerVolumeSpecName "kube-api-access-vgtnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.452389 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-combined-ca-bundle\") pod \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.452651 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-scripts\") pod \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.452719 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-config-data\") pod \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\" (UID: \"5e60f29c-ef3b-4733-a6d8-92cd74e10eac\") " Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 
21:22:54.453607 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgtnh\" (UniqueName: \"kubernetes.io/projected/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-kube-api-access-vgtnh\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.468469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-scripts" (OuterVolumeSpecName: "scripts") pod "5e60f29c-ef3b-4733-a6d8-92cd74e10eac" (UID: "5e60f29c-ef3b-4733-a6d8-92cd74e10eac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.493905 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e60f29c-ef3b-4733-a6d8-92cd74e10eac" (UID: "5e60f29c-ef3b-4733-a6d8-92cd74e10eac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.497799 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-config-data" (OuterVolumeSpecName: "config-data") pod "5e60f29c-ef3b-4733-a6d8-92cd74e10eac" (UID: "5e60f29c-ef3b-4733-a6d8-92cd74e10eac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:22:54 crc kubenswrapper[4886]: W0219 21:22:54.549035 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6627da_95f2_48b8_ba42_eae7018d98b5.slice/crio-e9f5d4e651ca900002ffdd37c07f0a880bb78dbb765116875c240f246192e5fd WatchSource:0}: Error finding container e9f5d4e651ca900002ffdd37c07f0a880bb78dbb765116875c240f246192e5fd: Status 404 returned error can't find the container with id e9f5d4e651ca900002ffdd37c07f0a880bb78dbb765116875c240f246192e5fd Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.550169 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-8xp7p"] Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.556461 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.556485 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.556493 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e60f29c-ef3b-4733-a6d8-92cd74e10eac-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.675737 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3436-account-create-update-kk7fg"] Feb 19 21:22:54 crc kubenswrapper[4886]: W0219 21:22:54.678169 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffed57be_9be1_478b_b43b_c6d67de8630c.slice/crio-1230e5ca26348c4db6ecf8b644da27f9888964983002f20f8191a290135a74fd WatchSource:0}: Error finding container 1230e5ca26348c4db6ecf8b644da27f9888964983002f20f8191a290135a74fd: Status 404 returned error can't find the container with id 1230e5ca26348c4db6ecf8b644da27f9888964983002f20f8191a290135a74fd Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.776634 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bp48t" event={"ID":"5e60f29c-ef3b-4733-a6d8-92cd74e10eac","Type":"ContainerDied","Data":"2d1740955ce0428711122416e087a512564c2bf18097784bfa1bcc1fc732ea87"} Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.776671 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1740955ce0428711122416e087a512564c2bf18097784bfa1bcc1fc732ea87" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.776687 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bp48t" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.778943 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-8xp7p" event={"ID":"be6627da-95f2-48b8-ba42-eae7018d98b5","Type":"ContainerStarted","Data":"17b9c0e1a0dcac51923f8d0a5389084c0b49fce73aa5a2ec476b75ce30de53ff"} Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.778968 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-8xp7p" event={"ID":"be6627da-95f2-48b8-ba42-eae7018d98b5","Type":"ContainerStarted","Data":"e9f5d4e651ca900002ffdd37c07f0a880bb78dbb765116875c240f246192e5fd"} Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.782789 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3436-account-create-update-kk7fg" event={"ID":"ffed57be-9be1-478b-b43b-c6d67de8630c","Type":"ContainerStarted","Data":"1230e5ca26348c4db6ecf8b644da27f9888964983002f20f8191a290135a74fd"} Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.805760 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-8xp7p" podStartSLOduration=1.8057372250000001 podStartE2EDuration="1.805737225s" podCreationTimestamp="2026-02-19 21:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:22:54.797933022 +0000 UTC m=+1405.425776112" watchObservedRunningTime="2026-02-19 21:22:54.805737225 +0000 UTC m=+1405.433580275" Feb 19 21:22:54 crc kubenswrapper[4886]: I0219 21:22:54.997170 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 21:22:55 crc kubenswrapper[4886]: E0219 21:22:55.009884 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e60f29c-ef3b-4733-a6d8-92cd74e10eac" containerName="nova-cell0-conductor-db-sync" Feb 19 21:22:55 crc kubenswrapper[4886]: 
I0219 21:22:55.009911 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e60f29c-ef3b-4733-a6d8-92cd74e10eac" containerName="nova-cell0-conductor-db-sync" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.010170 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e60f29c-ef3b-4733-a6d8-92cd74e10eac" containerName="nova-cell0-conductor-db-sync" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.010924 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.013723 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.013962 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rtmrp" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.028107 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.067136 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfa8712-6bfd-40b7-b5ed-282f257baf77-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.067219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfa8712-6bfd-40b7-b5ed-282f257baf77-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.067612 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7s6k5\" (UniqueName: \"kubernetes.io/projected/7cfa8712-6bfd-40b7-b5ed-282f257baf77-kube-api-access-7s6k5\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.171237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6k5\" (UniqueName: \"kubernetes.io/projected/7cfa8712-6bfd-40b7-b5ed-282f257baf77-kube-api-access-7s6k5\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.171376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfa8712-6bfd-40b7-b5ed-282f257baf77-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.171407 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfa8712-6bfd-40b7-b5ed-282f257baf77-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.177747 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfa8712-6bfd-40b7-b5ed-282f257baf77-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.177786 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7cfa8712-6bfd-40b7-b5ed-282f257baf77-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.190623 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6k5\" (UniqueName: \"kubernetes.io/projected/7cfa8712-6bfd-40b7-b5ed-282f257baf77-kube-api-access-7s6k5\") pod \"nova-cell0-conductor-0\" (UID: \"7cfa8712-6bfd-40b7-b5ed-282f257baf77\") " pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.359948 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.797860 4886 generic.go:334] "Generic (PLEG): container finished" podID="ffed57be-9be1-478b-b43b-c6d67de8630c" containerID="17256ff59e07d508420c8f2ea37f815b84e92bbe34dbd4a000b8ccdbe63cdfdd" exitCode=0 Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.798068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3436-account-create-update-kk7fg" event={"ID":"ffed57be-9be1-478b-b43b-c6d67de8630c","Type":"ContainerDied","Data":"17256ff59e07d508420c8f2ea37f815b84e92bbe34dbd4a000b8ccdbe63cdfdd"} Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.810879 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerStarted","Data":"56fa01b3604d3aa7fa723490f1a74e740abe1d861bb872a32a6a51eca4e55dcf"} Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.811435 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.813088 4886 generic.go:334] "Generic (PLEG): container finished" podID="be6627da-95f2-48b8-ba42-eae7018d98b5" 
containerID="17b9c0e1a0dcac51923f8d0a5389084c0b49fce73aa5a2ec476b75ce30de53ff" exitCode=0 Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.813119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-8xp7p" event={"ID":"be6627da-95f2-48b8-ba42-eae7018d98b5","Type":"ContainerDied","Data":"17b9c0e1a0dcac51923f8d0a5389084c0b49fce73aa5a2ec476b75ce30de53ff"} Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.836046 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.122381936 podStartE2EDuration="5.836027541s" podCreationTimestamp="2026-02-19 21:22:50 +0000 UTC" firstStartedPulling="2026-02-19 21:22:51.613353514 +0000 UTC m=+1402.241196574" lastFinishedPulling="2026-02-19 21:22:55.326999129 +0000 UTC m=+1405.954842179" observedRunningTime="2026-02-19 21:22:55.833292164 +0000 UTC m=+1406.461135234" watchObservedRunningTime="2026-02-19 21:22:55.836027541 +0000 UTC m=+1406.463870591" Feb 19 21:22:55 crc kubenswrapper[4886]: I0219 21:22:55.913758 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 21:22:56 crc kubenswrapper[4886]: I0219 21:22:56.829702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7cfa8712-6bfd-40b7-b5ed-282f257baf77","Type":"ContainerStarted","Data":"470f8f7a4416a7b513e34fa11e8edbbf84e9eeebf6c1feb34fa5be29cfc2cc2b"} Feb 19 21:22:56 crc kubenswrapper[4886]: I0219 21:22:56.833342 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7cfa8712-6bfd-40b7-b5ed-282f257baf77","Type":"ContainerStarted","Data":"c27927e0a2f5cbf708bcdc0decaad0894314ab14d9e05610bb6fed9eda15a6af"} Feb 19 21:22:56 crc kubenswrapper[4886]: I0219 21:22:56.833374 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 21:22:56 crc kubenswrapper[4886]: I0219 
21:22:56.858502 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.858479434 podStartE2EDuration="2.858479434s" podCreationTimestamp="2026-02-19 21:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:22:56.85669921 +0000 UTC m=+1407.484542260" watchObservedRunningTime="2026-02-19 21:22:56.858479434 +0000 UTC m=+1407.486322494" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.429731 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.436466 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.525389 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwnh5\" (UniqueName: \"kubernetes.io/projected/be6627da-95f2-48b8-ba42-eae7018d98b5-kube-api-access-bwnh5\") pod \"be6627da-95f2-48b8-ba42-eae7018d98b5\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.525486 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffed57be-9be1-478b-b43b-c6d67de8630c-operator-scripts\") pod \"ffed57be-9be1-478b-b43b-c6d67de8630c\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.525752 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlxcc\" (UniqueName: \"kubernetes.io/projected/ffed57be-9be1-478b-b43b-c6d67de8630c-kube-api-access-rlxcc\") pod \"ffed57be-9be1-478b-b43b-c6d67de8630c\" (UID: \"ffed57be-9be1-478b-b43b-c6d67de8630c\") " Feb 19 
21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.526008 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be6627da-95f2-48b8-ba42-eae7018d98b5-operator-scripts\") pod \"be6627da-95f2-48b8-ba42-eae7018d98b5\" (UID: \"be6627da-95f2-48b8-ba42-eae7018d98b5\") " Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.527999 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffed57be-9be1-478b-b43b-c6d67de8630c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffed57be-9be1-478b-b43b-c6d67de8630c" (UID: "ffed57be-9be1-478b-b43b-c6d67de8630c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.528027 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be6627da-95f2-48b8-ba42-eae7018d98b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be6627da-95f2-48b8-ba42-eae7018d98b5" (UID: "be6627da-95f2-48b8-ba42-eae7018d98b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.550288 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffed57be-9be1-478b-b43b-c6d67de8630c-kube-api-access-rlxcc" (OuterVolumeSpecName: "kube-api-access-rlxcc") pod "ffed57be-9be1-478b-b43b-c6d67de8630c" (UID: "ffed57be-9be1-478b-b43b-c6d67de8630c"). InnerVolumeSpecName "kube-api-access-rlxcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.551577 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6627da-95f2-48b8-ba42-eae7018d98b5-kube-api-access-bwnh5" (OuterVolumeSpecName: "kube-api-access-bwnh5") pod "be6627da-95f2-48b8-ba42-eae7018d98b5" (UID: "be6627da-95f2-48b8-ba42-eae7018d98b5"). InnerVolumeSpecName "kube-api-access-bwnh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.629349 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be6627da-95f2-48b8-ba42-eae7018d98b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.629391 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwnh5\" (UniqueName: \"kubernetes.io/projected/be6627da-95f2-48b8-ba42-eae7018d98b5-kube-api-access-bwnh5\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.629593 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffed57be-9be1-478b-b43b-c6d67de8630c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.629607 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlxcc\" (UniqueName: \"kubernetes.io/projected/ffed57be-9be1-478b-b43b-c6d67de8630c-kube-api-access-rlxcc\") on node \"crc\" DevicePath \"\"" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.845489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-8xp7p" event={"ID":"be6627da-95f2-48b8-ba42-eae7018d98b5","Type":"ContainerDied","Data":"e9f5d4e651ca900002ffdd37c07f0a880bb78dbb765116875c240f246192e5fd"} Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.845536 4886 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f5d4e651ca900002ffdd37c07f0a880bb78dbb765116875c240f246192e5fd" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.845647 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-8xp7p" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.847813 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3436-account-create-update-kk7fg" event={"ID":"ffed57be-9be1-478b-b43b-c6d67de8630c","Type":"ContainerDied","Data":"1230e5ca26348c4db6ecf8b644da27f9888964983002f20f8191a290135a74fd"} Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.847858 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-3436-account-create-update-kk7fg" Feb 19 21:22:57 crc kubenswrapper[4886]: I0219 21:22:57.847876 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1230e5ca26348c4db6ecf8b644da27f9888964983002f20f8191a290135a74fd" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.077498 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-cqfbx"] Feb 19 21:22:59 crc kubenswrapper[4886]: E0219 21:22:59.077960 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6627da-95f2-48b8-ba42-eae7018d98b5" containerName="mariadb-database-create" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.077972 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6627da-95f2-48b8-ba42-eae7018d98b5" containerName="mariadb-database-create" Feb 19 21:22:59 crc kubenswrapper[4886]: E0219 21:22:59.077986 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffed57be-9be1-478b-b43b-c6d67de8630c" containerName="mariadb-account-create-update" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.077992 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ffed57be-9be1-478b-b43b-c6d67de8630c" containerName="mariadb-account-create-update" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.078227 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffed57be-9be1-478b-b43b-c6d67de8630c" containerName="mariadb-account-create-update" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.078248 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6627da-95f2-48b8-ba42-eae7018d98b5" containerName="mariadb-database-create" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.079000 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.081921 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-85ctw" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.082934 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.083163 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.087057 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-scripts\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.087103 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmcr\" (UniqueName: \"kubernetes.io/projected/0aec76ed-0cd6-4468-85a4-23c32dcc5708-kube-api-access-wdmcr\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc 
kubenswrapper[4886]: I0219 21:22:59.087134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-config-data\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.087174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-combined-ca-bundle\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.087514 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.092693 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-cqfbx"] Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.189828 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-scripts\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.189879 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdmcr\" (UniqueName: \"kubernetes.io/projected/0aec76ed-0cd6-4468-85a4-23c32dcc5708-kube-api-access-wdmcr\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.189911 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-config-data\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.189943 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-combined-ca-bundle\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.197102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-scripts\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.197235 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-config-data\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.204668 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-combined-ca-bundle\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.209242 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdmcr\" (UniqueName: \"kubernetes.io/projected/0aec76ed-0cd6-4468-85a4-23c32dcc5708-kube-api-access-wdmcr\") pod \"aodh-db-sync-cqfbx\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " pod="openstack/aodh-db-sync-cqfbx" 
Feb 19 21:22:59 crc kubenswrapper[4886]: I0219 21:22:59.400918 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:23:00 crc kubenswrapper[4886]: I0219 21:23:00.098059 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-cqfbx"] Feb 19 21:23:00 crc kubenswrapper[4886]: I0219 21:23:00.922011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cqfbx" event={"ID":"0aec76ed-0cd6-4468-85a4-23c32dcc5708","Type":"ContainerStarted","Data":"6a8cd6510da88e6a3aad301ddf7c5334ca928acb27863abce3bb953244098c2a"} Feb 19 21:23:05 crc kubenswrapper[4886]: I0219 21:23:05.409644 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.070405 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2dzvs"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.072276 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.076074 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.076257 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.100028 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2dzvs"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.199418 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-scripts\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.199665 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98l2p\" (UniqueName: \"kubernetes.io/projected/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-kube-api-access-98l2p\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.199737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.199754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-config-data\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.269142 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.271152 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.275255 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.295384 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.297034 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.300201 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.301693 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-scripts\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.301742 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98l2p\" (UniqueName: \"kubernetes.io/projected/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-kube-api-access-98l2p\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " 
pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.301816 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.301840 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-config-data\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.314357 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-config-data\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.328817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-scripts\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.329256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 
21:23:06.329860 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.353501 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.381921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98l2p\" (UniqueName: \"kubernetes.io/projected/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-kube-api-access-98l2p\") pod \"nova-cell0-cell-mapping-2dzvs\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.403775 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htvw\" (UniqueName: \"kubernetes.io/projected/1e308384-cf87-4aaa-8d73-f35645e20d34-kube-api-access-4htvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.403855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.403887 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9wg\" (UniqueName: \"kubernetes.io/projected/868bd62d-ce89-48a0-a497-de647891f712-kube-api-access-4k9wg\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.403909 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.403977 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.404007 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-config-data\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.404088 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/868bd62d-ce89-48a0-a497-de647891f712-logs\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.436341 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.437989 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.460625 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.461758 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.506543 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.506974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/868bd62d-ce89-48a0-a497-de647891f712-logs\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4htvw\" (UniqueName: \"kubernetes.io/projected/1e308384-cf87-4aaa-8d73-f35645e20d34-kube-api-access-4htvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507192 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9wg\" (UniqueName: \"kubernetes.io/projected/868bd62d-ce89-48a0-a497-de647891f712-kube-api-access-4k9wg\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507279 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507311 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507351 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-config-data\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.507698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/868bd62d-ce89-48a0-a497-de647891f712-logs\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.526563 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-config-data\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.530033 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.531939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.531940 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.555481 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9wg\" (UniqueName: \"kubernetes.io/projected/868bd62d-ce89-48a0-a497-de647891f712-kube-api-access-4k9wg\") pod \"nova-metadata-0\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.556497 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4htvw\" (UniqueName: \"kubernetes.io/projected/1e308384-cf87-4aaa-8d73-f35645e20d34-kube-api-access-4htvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.560783 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-8mbkb"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.571814 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.579003 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-8mbkb"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.597584 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.628754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pq69\" (UniqueName: \"kubernetes.io/projected/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-kube-api-access-9pq69\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.628856 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.628941 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-config-data\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.652881 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.655347 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.658527 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.681978 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731478 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pq69\" (UniqueName: \"kubernetes.io/projected/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-kube-api-access-9pq69\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731517 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cphvw\" (UniqueName: \"kubernetes.io/projected/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-kube-api-access-cphvw\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731568 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731584 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 
21:23:06.731597 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731634 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-config\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-config-data\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731716 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.731742 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-svc\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.748017 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-config-data\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.749967 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.786158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pq69\" (UniqueName: \"kubernetes.io/projected/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-kube-api-access-9pq69\") pod \"nova-scheduler-0\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834294 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf5140-9213-4b4b-a7e1-ec92c4145c66-logs\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-svc\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834628 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834704 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cphvw\" (UniqueName: \"kubernetes.io/projected/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-kube-api-access-cphvw\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdvh\" (UniqueName: \"kubernetes.io/projected/68bf5140-9213-4b4b-a7e1-ec92c4145c66-kube-api-access-cqdvh\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834886 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.834952 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-config-data\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.835229 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-config\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.850001 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-config\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.851559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.853620 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: 
\"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.853884 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.856194 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.856723 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-svc\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.864089 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cphvw\" (UniqueName: \"kubernetes.io/projected/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-kube-api-access-cphvw\") pod \"dnsmasq-dns-9b86998b5-8mbkb\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.909790 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.937334 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.937459 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdvh\" (UniqueName: \"kubernetes.io/projected/68bf5140-9213-4b4b-a7e1-ec92c4145c66-kube-api-access-cqdvh\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.937507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-config-data\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.937569 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf5140-9213-4b4b-a7e1-ec92c4145c66-logs\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.938240 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf5140-9213-4b4b-a7e1-ec92c4145c66-logs\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.942826 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-config-data\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.952308 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.952833 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:06 crc kubenswrapper[4886]: I0219 21:23:06.958888 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdvh\" (UniqueName: \"kubernetes.io/projected/68bf5140-9213-4b4b-a7e1-ec92c4145c66-kube-api-access-cqdvh\") pod \"nova-api-0\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " pod="openstack/nova-api-0" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.017379 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.047858 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cqfbx" event={"ID":"0aec76ed-0cd6-4468-85a4-23c32dcc5708","Type":"ContainerStarted","Data":"e3376baf903f20790cba4588626b53cdaa11fd73a3818992364894e2cff04c0e"} Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.081290 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-cqfbx" podStartSLOduration=2.265699211 podStartE2EDuration="8.081272546s" podCreationTimestamp="2026-02-19 21:22:59 +0000 UTC" firstStartedPulling="2026-02-19 21:23:00.100187891 +0000 UTC m=+1410.728030941" lastFinishedPulling="2026-02-19 21:23:05.915761206 +0000 UTC m=+1416.543604276" observedRunningTime="2026-02-19 21:23:07.071839644 +0000 UTC m=+1417.699682694" watchObservedRunningTime="2026-02-19 21:23:07.081272546 +0000 UTC m=+1417.709115596" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.185729 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2dzvs"] Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.444807 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9c97q"] Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.447086 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.491770 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.492525 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.503321 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9c97q"] Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.554913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.555009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2nlc\" (UniqueName: \"kubernetes.io/projected/9dd3f102-b16f-4118-a32e-64eda5ae8047-kube-api-access-j2nlc\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.555068 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-config-data\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.555117 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-scripts\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.657294 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2nlc\" (UniqueName: \"kubernetes.io/projected/9dd3f102-b16f-4118-a32e-64eda5ae8047-kube-api-access-j2nlc\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.657371 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-config-data\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.657402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-scripts\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.657522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.661926 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-scripts\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.662206 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.665898 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-config-data\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.675911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2nlc\" (UniqueName: \"kubernetes.io/projected/9dd3f102-b16f-4118-a32e-64eda5ae8047-kube-api-access-j2nlc\") pod \"nova-cell1-conductor-db-sync-9c97q\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:07 crc kubenswrapper[4886]: I0219 21:23:07.807562 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.072716 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2dzvs" event={"ID":"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c","Type":"ContainerStarted","Data":"f675f9558378fc11064443b53f590805e586c832a9a331c6160a1b849d235cfe"} Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.073113 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2dzvs" event={"ID":"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c","Type":"ContainerStarted","Data":"c9acaa678a04764d1c07a1404bdc9ea54cad3244d906c5182beba2769784b034"} Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.194755 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2dzvs" podStartSLOduration=2.194738743 podStartE2EDuration="2.194738743s" podCreationTimestamp="2026-02-19 21:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:08.112638259 +0000 UTC m=+1418.740481309" watchObservedRunningTime="2026-02-19 21:23:08.194738743 +0000 UTC m=+1418.822581793" Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.237312 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.321013 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.356300 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.658379 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9c97q"] Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.671697 4886 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-8mbkb"] Feb 19 21:23:08 crc kubenswrapper[4886]: W0219 21:23:08.676697 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68bf5140_9213_4b4b_a7e1_ec92c4145c66.slice/crio-e9a1bd525427ee1bf2d09693ac0c8532ce6ed31f091d53c4f7f3819c632c359d WatchSource:0}: Error finding container e9a1bd525427ee1bf2d09693ac0c8532ce6ed31f091d53c4f7f3819c632c359d: Status 404 returned error can't find the container with id e9a1bd525427ee1bf2d09693ac0c8532ce6ed31f091d53c4f7f3819c632c359d Feb 19 21:23:08 crc kubenswrapper[4886]: I0219 21:23:08.695405 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.083776 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9c97q" event={"ID":"9dd3f102-b16f-4118-a32e-64eda5ae8047","Type":"ContainerStarted","Data":"06077776d05e160695a0e955fdec2b08900c1467c2e40c59483a3e97149a8fb0"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.083842 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9c97q" event={"ID":"9dd3f102-b16f-4118-a32e-64eda5ae8047","Type":"ContainerStarted","Data":"03457f8c66730aa765c8b6a11716fd79660351e177e6a709ab2da117cefb5fce"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.087483 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1e308384-cf87-4aaa-8d73-f35645e20d34","Type":"ContainerStarted","Data":"448a8d048ad3641d18ef90113e9512126fa1309a95cd97231f025e755b944312"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.089962 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"868bd62d-ce89-48a0-a497-de647891f712","Type":"ContainerStarted","Data":"9ae2fb156688a31545c30d6b71aaa810cc50fc4d6e32a38254a45cce276fa168"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.095867 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68bf5140-9213-4b4b-a7e1-ec92c4145c66","Type":"ContainerStarted","Data":"e9a1bd525427ee1bf2d09693ac0c8532ce6ed31f091d53c4f7f3819c632c359d"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.105871 4886 generic.go:334] "Generic (PLEG): container finished" podID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerID="25d01f1d94092801dbc89f87de5924d0d87b5aae365fd020e46b7558d37ab937" exitCode=0 Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.105964 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" event={"ID":"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0","Type":"ContainerDied","Data":"25d01f1d94092801dbc89f87de5924d0d87b5aae365fd020e46b7558d37ab937"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.105990 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" event={"ID":"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0","Type":"ContainerStarted","Data":"a253174d8061a8ac031f77bc991c7abcf88203e7b0557b97417b6552f371729d"} Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.107874 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9c97q" podStartSLOduration=2.107861109 podStartE2EDuration="2.107861109s" podCreationTimestamp="2026-02-19 21:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:09.105612274 +0000 UTC m=+1419.733455324" watchObservedRunningTime="2026-02-19 21:23:09.107861109 +0000 UTC m=+1419.735704159" Feb 19 21:23:09 crc kubenswrapper[4886]: I0219 21:23:09.116497 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f270bcdf-715a-4ceb-8234-bcdca0aee4ec","Type":"ContainerStarted","Data":"79ddd48b9b548cc1e30b41e93b36aa005e262d59784f248d28673eaa2948779a"} Feb 19 21:23:10 crc kubenswrapper[4886]: I0219 21:23:10.050547 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:10 crc kubenswrapper[4886]: I0219 21:23:10.061138 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:10 crc kubenswrapper[4886]: I0219 21:23:10.128941 4886 generic.go:334] "Generic (PLEG): container finished" podID="0aec76ed-0cd6-4468-85a4-23c32dcc5708" containerID="e3376baf903f20790cba4588626b53cdaa11fd73a3818992364894e2cff04c0e" exitCode=0 Feb 19 21:23:10 crc kubenswrapper[4886]: I0219 21:23:10.129008 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cqfbx" event={"ID":"0aec76ed-0cd6-4468-85a4-23c32dcc5708","Type":"ContainerDied","Data":"e3376baf903f20790cba4588626b53cdaa11fd73a3818992364894e2cff04c0e"} Feb 19 21:23:10 crc kubenswrapper[4886]: I0219 21:23:10.132004 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" event={"ID":"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0","Type":"ContainerStarted","Data":"dfadc562e3c2e7816b4b7caa433d368b298665dc4c5bf162f7844d8af16972d0"} Feb 19 21:23:10 crc kubenswrapper[4886]: I0219 21:23:10.169564 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" podStartSLOduration=4.1695468 podStartE2EDuration="4.1695468s" podCreationTimestamp="2026-02-19 21:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:10.16593746 +0000 UTC m=+1420.793780510" watchObservedRunningTime="2026-02-19 21:23:10.1695468 +0000 UTC m=+1420.797389850" Feb 19 21:23:11 crc 
kubenswrapper[4886]: I0219 21:23:11.141974 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:11 crc kubenswrapper[4886]: I0219 21:23:11.914249 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:23:11 crc kubenswrapper[4886]: I0219 21:23:11.987986 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-combined-ca-bundle\") pod \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " Feb 19 21:23:11 crc kubenswrapper[4886]: I0219 21:23:11.988352 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdmcr\" (UniqueName: \"kubernetes.io/projected/0aec76ed-0cd6-4468-85a4-23c32dcc5708-kube-api-access-wdmcr\") pod \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " Feb 19 21:23:11 crc kubenswrapper[4886]: I0219 21:23:11.988387 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-config-data\") pod \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " Feb 19 21:23:11 crc kubenswrapper[4886]: I0219 21:23:11.988410 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-scripts\") pod \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\" (UID: \"0aec76ed-0cd6-4468-85a4-23c32dcc5708\") " Feb 19 21:23:11 crc kubenswrapper[4886]: I0219 21:23:11.999592 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aec76ed-0cd6-4468-85a4-23c32dcc5708-kube-api-access-wdmcr" (OuterVolumeSpecName: 
"kube-api-access-wdmcr") pod "0aec76ed-0cd6-4468-85a4-23c32dcc5708" (UID: "0aec76ed-0cd6-4468-85a4-23c32dcc5708"). InnerVolumeSpecName "kube-api-access-wdmcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.003535 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-scripts" (OuterVolumeSpecName: "scripts") pod "0aec76ed-0cd6-4468-85a4-23c32dcc5708" (UID: "0aec76ed-0cd6-4468-85a4-23c32dcc5708"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.037101 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aec76ed-0cd6-4468-85a4-23c32dcc5708" (UID: "0aec76ed-0cd6-4468-85a4-23c32dcc5708"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.042553 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-config-data" (OuterVolumeSpecName: "config-data") pod "0aec76ed-0cd6-4468-85a4-23c32dcc5708" (UID: "0aec76ed-0cd6-4468-85a4-23c32dcc5708"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.091155 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.091185 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdmcr\" (UniqueName: \"kubernetes.io/projected/0aec76ed-0cd6-4468-85a4-23c32dcc5708-kube-api-access-wdmcr\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.091197 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.091206 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aec76ed-0cd6-4468-85a4-23c32dcc5708-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.158099 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-cqfbx" Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.164775 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cqfbx" event={"ID":"0aec76ed-0cd6-4468-85a4-23c32dcc5708","Type":"ContainerDied","Data":"6a8cd6510da88e6a3aad301ddf7c5334ca928acb27863abce3bb953244098c2a"} Feb 19 21:23:12 crc kubenswrapper[4886]: I0219 21:23:12.164836 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8cd6510da88e6a3aad301ddf7c5334ca928acb27863abce3bb953244098c2a" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.174247 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"868bd62d-ce89-48a0-a497-de647891f712","Type":"ContainerStarted","Data":"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325"} Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.844840 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 21:23:13 crc kubenswrapper[4886]: E0219 21:23:13.845864 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aec76ed-0cd6-4468-85a4-23c32dcc5708" containerName="aodh-db-sync" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.845891 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aec76ed-0cd6-4468-85a4-23c32dcc5708" containerName="aodh-db-sync" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.846230 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aec76ed-0cd6-4468-85a4-23c32dcc5708" containerName="aodh-db-sync" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.858773 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.863509 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.863779 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.865888 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.870780 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-85ctw" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.941939 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhsfl\" (UniqueName: \"kubernetes.io/projected/289abf37-876b-4ef4-8782-9749f978c46f-kube-api-access-jhsfl\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.942091 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-scripts\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.942145 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:13 crc kubenswrapper[4886]: I0219 21:23:13.942189 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-config-data\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.044572 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-scripts\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.044646 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.044701 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-config-data\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.044830 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhsfl\" (UniqueName: \"kubernetes.io/projected/289abf37-876b-4ef4-8782-9749f978c46f-kube-api-access-jhsfl\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.051199 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-scripts\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.051241 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-config-data\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.053507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.067690 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhsfl\" (UniqueName: \"kubernetes.io/projected/289abf37-876b-4ef4-8782-9749f978c46f-kube-api-access-jhsfl\") pod \"aodh-0\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.185647 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"868bd62d-ce89-48a0-a497-de647891f712","Type":"ContainerStarted","Data":"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6"} Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.185915 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-metadata" containerID="cri-o://ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6" gracePeriod=30 Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.185790 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-log" containerID="cri-o://f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325" gracePeriod=30 Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.189142 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"68bf5140-9213-4b4b-a7e1-ec92c4145c66","Type":"ContainerStarted","Data":"d20ae047f5e299f0ad7737e8201e4d6fc09fec65ff38496b98ad99f02bbe3691"} Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.189167 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68bf5140-9213-4b4b-a7e1-ec92c4145c66","Type":"ContainerStarted","Data":"e216e92d61e86a2826395229db125e1e9c96ca80447561479493a535dc21735b"} Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.190563 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f270bcdf-715a-4ceb-8234-bcdca0aee4ec","Type":"ContainerStarted","Data":"a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d"} Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.194056 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1e308384-cf87-4aaa-8d73-f35645e20d34","Type":"ContainerStarted","Data":"92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c"} Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.194252 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1e308384-cf87-4aaa-8d73-f35645e20d34" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c" gracePeriod=30 Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.196604 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.212534 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.917507554 podStartE2EDuration="8.212518575s" podCreationTimestamp="2026-02-19 21:23:06 +0000 UTC" firstStartedPulling="2026-02-19 21:23:08.218372256 +0000 UTC m=+1418.846215296" lastFinishedPulling="2026-02-19 21:23:12.513383227 +0000 UTC m=+1423.141226317" observedRunningTime="2026-02-19 21:23:14.211593592 +0000 UTC m=+1424.839436642" watchObservedRunningTime="2026-02-19 21:23:14.212518575 +0000 UTC m=+1424.840361625" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.235041 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.953857142 podStartE2EDuration="8.23502597s" podCreationTimestamp="2026-02-19 21:23:06 +0000 UTC" firstStartedPulling="2026-02-19 21:23:08.680354118 +0000 UTC m=+1419.308197168" lastFinishedPulling="2026-02-19 21:23:12.961522946 +0000 UTC m=+1423.589365996" observedRunningTime="2026-02-19 21:23:14.231711118 +0000 UTC m=+1424.859554168" watchObservedRunningTime="2026-02-19 21:23:14.23502597 +0000 UTC m=+1424.862869020" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.265768 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.715556595 podStartE2EDuration="8.265748377s" podCreationTimestamp="2026-02-19 21:23:06 +0000 UTC" firstStartedPulling="2026-02-19 21:23:08.370001115 +0000 UTC m=+1418.997844165" lastFinishedPulling="2026-02-19 21:23:12.920192887 +0000 UTC m=+1423.548035947" observedRunningTime="2026-02-19 21:23:14.257532585 +0000 UTC m=+1424.885375635" watchObservedRunningTime="2026-02-19 21:23:14.265748377 +0000 UTC m=+1424.893591427" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.277135 4886 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.037845523 podStartE2EDuration="8.277119658s" podCreationTimestamp="2026-02-19 21:23:06 +0000 UTC" firstStartedPulling="2026-02-19 21:23:08.267859547 +0000 UTC m=+1418.895702597" lastFinishedPulling="2026-02-19 21:23:12.507133642 +0000 UTC m=+1423.134976732" observedRunningTime="2026-02-19 21:23:14.276445891 +0000 UTC m=+1424.904288941" watchObservedRunningTime="2026-02-19 21:23:14.277119658 +0000 UTC m=+1424.904962708" Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.716864 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 21:23:14 crc kubenswrapper[4886]: I0219 21:23:14.964999 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.085342 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k9wg\" (UniqueName: \"kubernetes.io/projected/868bd62d-ce89-48a0-a497-de647891f712-kube-api-access-4k9wg\") pod \"868bd62d-ce89-48a0-a497-de647891f712\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.085442 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/868bd62d-ce89-48a0-a497-de647891f712-logs\") pod \"868bd62d-ce89-48a0-a497-de647891f712\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.085710 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle\") pod \"868bd62d-ce89-48a0-a497-de647891f712\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.085754 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-config-data\") pod \"868bd62d-ce89-48a0-a497-de647891f712\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.086404 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/868bd62d-ce89-48a0-a497-de647891f712-logs" (OuterVolumeSpecName: "logs") pod "868bd62d-ce89-48a0-a497-de647891f712" (UID: "868bd62d-ce89-48a0-a497-de647891f712"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.097554 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868bd62d-ce89-48a0-a497-de647891f712-kube-api-access-4k9wg" (OuterVolumeSpecName: "kube-api-access-4k9wg") pod "868bd62d-ce89-48a0-a497-de647891f712" (UID: "868bd62d-ce89-48a0-a497-de647891f712"). InnerVolumeSpecName "kube-api-access-4k9wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.144741 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-config-data" (OuterVolumeSpecName: "config-data") pod "868bd62d-ce89-48a0-a497-de647891f712" (UID: "868bd62d-ce89-48a0-a497-de647891f712"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.187679 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "868bd62d-ce89-48a0-a497-de647891f712" (UID: "868bd62d-ce89-48a0-a497-de647891f712"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.188587 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle\") pod \"868bd62d-ce89-48a0-a497-de647891f712\" (UID: \"868bd62d-ce89-48a0-a497-de647891f712\") " Feb 19 21:23:15 crc kubenswrapper[4886]: W0219 21:23:15.189661 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/868bd62d-ce89-48a0-a497-de647891f712/volumes/kubernetes.io~secret/combined-ca-bundle Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.189697 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "868bd62d-ce89-48a0-a497-de647891f712" (UID: "868bd62d-ce89-48a0-a497-de647891f712"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.202088 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.202126 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868bd62d-ce89-48a0-a497-de647891f712-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.202162 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k9wg\" (UniqueName: \"kubernetes.io/projected/868bd62d-ce89-48a0-a497-de647891f712-kube-api-access-4k9wg\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.202181 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/868bd62d-ce89-48a0-a497-de647891f712-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.216600 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerStarted","Data":"6538b6d8a6bc03a3bdc9c7822675dda2cbfcfce2457427728c68495c06baa00a"} Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.223131 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.223788 4886 generic.go:334] "Generic (PLEG): container finished" podID="868bd62d-ce89-48a0-a497-de647891f712" containerID="ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6" exitCode=0 Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.223825 4886 generic.go:334] "Generic (PLEG): container finished" podID="868bd62d-ce89-48a0-a497-de647891f712" containerID="f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325" exitCode=143 Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.223993 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"868bd62d-ce89-48a0-a497-de647891f712","Type":"ContainerDied","Data":"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6"} Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.224027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"868bd62d-ce89-48a0-a497-de647891f712","Type":"ContainerDied","Data":"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325"} Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.224039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"868bd62d-ce89-48a0-a497-de647891f712","Type":"ContainerDied","Data":"9ae2fb156688a31545c30d6b71aaa810cc50fc4d6e32a38254a45cce276fa168"} Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.224053 4886 scope.go:117] "RemoveContainer" containerID="ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.257225 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.267113 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 
21:23:15.351877 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:15 crc kubenswrapper[4886]: E0219 21:23:15.352778 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-metadata" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.352792 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-metadata" Feb 19 21:23:15 crc kubenswrapper[4886]: E0219 21:23:15.352830 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-log" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.352841 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-log" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.353156 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-log" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.353179 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="868bd62d-ce89-48a0-a497-de647891f712" containerName="nova-metadata-metadata" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.365314 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.365418 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.368509 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.368722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.382406 4886 scope.go:117] "RemoveContainer" containerID="f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.493207 4886 scope.go:117] "RemoveContainer" containerID="ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6" Feb 19 21:23:15 crc kubenswrapper[4886]: E0219 21:23:15.493690 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6\": container with ID starting with ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6 not found: ID does not exist" containerID="ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.493734 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6"} err="failed to get container status \"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6\": rpc error: code = NotFound desc = could not find container \"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6\": container with ID starting with ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6 not found: ID does not exist" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.493759 4886 scope.go:117] "RemoveContainer" containerID="f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325" 
Feb 19 21:23:15 crc kubenswrapper[4886]: E0219 21:23:15.494063 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325\": container with ID starting with f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325 not found: ID does not exist" containerID="f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.494113 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325"} err="failed to get container status \"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325\": rpc error: code = NotFound desc = could not find container \"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325\": container with ID starting with f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325 not found: ID does not exist" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.494141 4886 scope.go:117] "RemoveContainer" containerID="ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.494595 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6"} err="failed to get container status \"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6\": rpc error: code = NotFound desc = could not find container \"ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6\": container with ID starting with ee282d39ec17f9d24fc9f748cc0cdf4e7ad460169757558f33c079003cbf9bd6 not found: ID does not exist" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.494618 4886 scope.go:117] "RemoveContainer" 
containerID="f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.494976 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325"} err="failed to get container status \"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325\": rpc error: code = NotFound desc = could not find container \"f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325\": container with ID starting with f05ff13510d2f665470cbcffbe8777bf814fac828c35cffcb82c1583b7523325 not found: ID does not exist" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.520790 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvns\" (UniqueName: \"kubernetes.io/projected/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-kube-api-access-npvns\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.520934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-config-data\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.520960 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.521565 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-logs\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.521767 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.623439 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-config-data\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.624239 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.624457 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-logs\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.624538 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " 
pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.624585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvns\" (UniqueName: \"kubernetes.io/projected/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-kube-api-access-npvns\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.626411 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-logs\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.628973 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-config-data\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.629775 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.630809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.662641 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvns\" (UniqueName: 
\"kubernetes.io/projected/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-kube-api-access-npvns\") pod \"nova-metadata-0\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " pod="openstack/nova-metadata-0" Feb 19 21:23:15 crc kubenswrapper[4886]: I0219 21:23:15.695153 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.266529 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.598147 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.625087 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868bd62d-ce89-48a0-a497-de647891f712" path="/var/lib/kubelet/pods/868bd62d-ce89-48a0-a497-de647891f712/volumes" Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.826581 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.911102 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.911155 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.951282 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 21:23:16 crc kubenswrapper[4886]: I0219 21:23:16.955088 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.018852 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.018894 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.033135 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-phdqr"] Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.033530 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerName="dnsmasq-dns" containerID="cri-o://1e61704d5d8073d2137e3517293f27ebf0e154ddea0234e91dffdf3ba5e857cb" gracePeriod=10 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.151030 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.151300 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-central-agent" containerID="cri-o://f98a37daf2d11a3c035b4f1deb3c09ce86d68609dd852e70f7d58d1daba67de4" gracePeriod=30 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.153364 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="proxy-httpd" containerID="cri-o://56fa01b3604d3aa7fa723490f1a74e740abe1d861bb872a32a6a51eca4e55dcf" gracePeriod=30 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.153401 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="sg-core" containerID="cri-o://9f34127cac6e731efb7bb7b587134de360fe4f46f5d0318789cbe627491f256e" gracePeriod=30 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.153413 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-notification-agent" containerID="cri-o://c0dd2bed96786f932ceec332c7021984c902d0889c7df7e2cb747a942cf38b76" gracePeriod=30 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.180465 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.233:3000/\": EOF" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.303721 4886 generic.go:334] "Generic (PLEG): container finished" podID="2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" containerID="f675f9558378fc11064443b53f590805e586c832a9a331c6160a1b849d235cfe" exitCode=0 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.303806 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2dzvs" event={"ID":"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c","Type":"ContainerDied","Data":"f675f9558378fc11064443b53f590805e586c832a9a331c6160a1b849d235cfe"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.326890 4886 generic.go:334] "Generic (PLEG): container finished" podID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerID="9f34127cac6e731efb7bb7b587134de360fe4f46f5d0318789cbe627491f256e" exitCode=2 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.327149 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerDied","Data":"9f34127cac6e731efb7bb7b587134de360fe4f46f5d0318789cbe627491f256e"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.332738 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerStarted","Data":"b6e5babe4369fa8f4e9c46a65995605769fd12d2b75c1be9e8f74cb41f0a1923"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.342743 4886 generic.go:334] "Generic 
(PLEG): container finished" podID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerID="1e61704d5d8073d2137e3517293f27ebf0e154ddea0234e91dffdf3ba5e857cb" exitCode=0 Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.342816 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" event={"ID":"7b6f412f-9f3e-466b-b65d-91fe1a38e212","Type":"ContainerDied","Data":"1e61704d5d8073d2137e3517293f27ebf0e154ddea0234e91dffdf3ba5e857cb"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.355778 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8","Type":"ContainerStarted","Data":"e33b07d5797f512e28fcf7f5da5d690f7cce83d3ff1acb2cf45f33bb76d86d86"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.355819 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8","Type":"ContainerStarted","Data":"8731cc61a269fc5f0327dc8cad3974afdf53bde7cb61491cd09ae1c042f66db4"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.355836 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8","Type":"ContainerStarted","Data":"9c13ec31a42dcd6ce58acf398637b5393f69f365cb38ea3085493df72cca34e1"} Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.418384 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.458697 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.458675581 podStartE2EDuration="2.458675581s" podCreationTimestamp="2026-02-19 21:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:17.380925124 +0000 
UTC m=+1428.008768174" watchObservedRunningTime="2026-02-19 21:23:17.458675581 +0000 UTC m=+1428.086518631" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.738344 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.836930 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmwt4\" (UniqueName: \"kubernetes.io/projected/7b6f412f-9f3e-466b-b65d-91fe1a38e212-kube-api-access-lmwt4\") pod \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.837007 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-nb\") pod \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.837129 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-swift-storage-0\") pod \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.837157 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-svc\") pod \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.837188 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-sb\") pod 
\"7b6f412f-9f3e-466b-b65d-91fe1a38e212\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.837255 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-config\") pod \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\" (UID: \"7b6f412f-9f3e-466b-b65d-91fe1a38e212\") " Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.912483 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6f412f-9f3e-466b-b65d-91fe1a38e212-kube-api-access-lmwt4" (OuterVolumeSpecName: "kube-api-access-lmwt4") pod "7b6f412f-9f3e-466b-b65d-91fe1a38e212" (UID: "7b6f412f-9f3e-466b-b65d-91fe1a38e212"). InnerVolumeSpecName "kube-api-access-lmwt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.939730 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-config" (OuterVolumeSpecName: "config") pod "7b6f412f-9f3e-466b-b65d-91fe1a38e212" (UID: "7b6f412f-9f3e-466b-b65d-91fe1a38e212"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.940170 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmwt4\" (UniqueName: \"kubernetes.io/projected/7b6f412f-9f3e-466b-b65d-91fe1a38e212-kube-api-access-lmwt4\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.950682 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:17 crc kubenswrapper[4886]: I0219 21:23:17.996855 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b6f412f-9f3e-466b-b65d-91fe1a38e212" (UID: "7b6f412f-9f3e-466b-b65d-91fe1a38e212"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.027794 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b6f412f-9f3e-466b-b65d-91fe1a38e212" (UID: "7b6f412f-9f3e-466b-b65d-91fe1a38e212"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.053887 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.054080 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.069896 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7b6f412f-9f3e-466b-b65d-91fe1a38e212" (UID: "7b6f412f-9f3e-466b-b65d-91fe1a38e212"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.075581 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b6f412f-9f3e-466b-b65d-91fe1a38e212" (UID: "7b6f412f-9f3e-466b-b65d-91fe1a38e212"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.104552 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.105490 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.156670 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.156875 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b6f412f-9f3e-466b-b65d-91fe1a38e212-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.365909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" event={"ID":"7b6f412f-9f3e-466b-b65d-91fe1a38e212","Type":"ContainerDied","Data":"58886e3fc4b725226dba8db7b3cc03bcde80ebb6f9a67ba26393f5204a4c36a5"} Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.365962 4886 scope.go:117] "RemoveContainer" containerID="1e61704d5d8073d2137e3517293f27ebf0e154ddea0234e91dffdf3ba5e857cb" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.365995 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-phdqr" Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.369157 4886 generic.go:334] "Generic (PLEG): container finished" podID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerID="56fa01b3604d3aa7fa723490f1a74e740abe1d861bb872a32a6a51eca4e55dcf" exitCode=0 Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.369178 4886 generic.go:334] "Generic (PLEG): container finished" podID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerID="f98a37daf2d11a3c035b4f1deb3c09ce86d68609dd852e70f7d58d1daba67de4" exitCode=0 Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.369388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerDied","Data":"56fa01b3604d3aa7fa723490f1a74e740abe1d861bb872a32a6a51eca4e55dcf"} Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.370049 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerDied","Data":"f98a37daf2d11a3c035b4f1deb3c09ce86d68609dd852e70f7d58d1daba67de4"} Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.421197 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-phdqr"] Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.434020 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-phdqr"] Feb 19 21:23:18 crc kubenswrapper[4886]: I0219 21:23:18.618821 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" path="/var/lib/kubelet/pods/7b6f412f-9f3e-466b-b65d-91fe1a38e212/volumes" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.122717 4886 scope.go:117] "RemoveContainer" containerID="6c554772ab49b7361025247ba1bccdebc39601afbd9b65c4b8a093b554f0079b" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.319460 4886 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.383113 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-combined-ca-bundle\") pod \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.383156 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98l2p\" (UniqueName: \"kubernetes.io/projected/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-kube-api-access-98l2p\") pod \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.383228 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-scripts\") pod \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.383409 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-config-data\") pod \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\" (UID: \"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.387624 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-kube-api-access-98l2p" (OuterVolumeSpecName: "kube-api-access-98l2p") pod "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" (UID: "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c"). InnerVolumeSpecName "kube-api-access-98l2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.392456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2dzvs" event={"ID":"2f6d74fb-e91d-4838-ab58-90aa48b0bc8c","Type":"ContainerDied","Data":"c9acaa678a04764d1c07a1404bdc9ea54cad3244d906c5182beba2769784b034"} Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.392495 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9acaa678a04764d1c07a1404bdc9ea54cad3244d906c5182beba2769784b034" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.392471 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2dzvs" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.394087 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-scripts" (OuterVolumeSpecName: "scripts") pod "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" (UID: "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.432241 4886 generic.go:334] "Generic (PLEG): container finished" podID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerID="c0dd2bed96786f932ceec332c7021984c902d0889c7df7e2cb747a942cf38b76" exitCode=0 Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.432627 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerDied","Data":"c0dd2bed96786f932ceec332c7021984c902d0889c7df7e2cb747a942cf38b76"} Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.447915 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-config-data" (OuterVolumeSpecName: "config-data") pod "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" (UID: "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.449201 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" (UID: "2f6d74fb-e91d-4838-ab58-90aa48b0bc8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.485727 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.485751 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98l2p\" (UniqueName: \"kubernetes.io/projected/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-kube-api-access-98l2p\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.485760 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.485769 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.559996 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587344 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-config-data\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587451 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6nh8\" (UniqueName: \"kubernetes.io/projected/75c0d65b-3609-4dd4-a676-8a8baa757cfe-kube-api-access-b6nh8\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587490 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-sg-core-conf-yaml\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587606 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-run-httpd\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587648 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-scripts\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587702 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-combined-ca-bundle\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.587734 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-log-httpd\") pod \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\" (UID: \"75c0d65b-3609-4dd4-a676-8a8baa757cfe\") " Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.589067 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.590298 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.594057 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.594095 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75c0d65b-3609-4dd4-a676-8a8baa757cfe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.597529 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c0d65b-3609-4dd4-a676-8a8baa757cfe-kube-api-access-b6nh8" (OuterVolumeSpecName: "kube-api-access-b6nh8") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "kube-api-access-b6nh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.607046 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-scripts" (OuterVolumeSpecName: "scripts") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.651340 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.700792 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6nh8\" (UniqueName: \"kubernetes.io/projected/75c0d65b-3609-4dd4-a676-8a8baa757cfe-kube-api-access-b6nh8\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.700821 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.700830 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.741816 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-config-data" (OuterVolumeSpecName: "config-data") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.753459 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75c0d65b-3609-4dd4-a676-8a8baa757cfe" (UID: "75c0d65b-3609-4dd4-a676-8a8baa757cfe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.802893 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:19 crc kubenswrapper[4886]: I0219 21:23:19.802919 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c0d65b-3609-4dd4-a676-8a8baa757cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.446645 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75c0d65b-3609-4dd4-a676-8a8baa757cfe","Type":"ContainerDied","Data":"f4088d9626e0061cbaa53c2be14951262a2d360a1678732eb7d6408eae1cc041"} Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.446914 4886 scope.go:117] "RemoveContainer" containerID="56fa01b3604d3aa7fa723490f1a74e740abe1d861bb872a32a6a51eca4e55dcf" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.446701 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.450628 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerStarted","Data":"7a5caf783c735afe9baa3d021033615037568552f2d2a8bfa85248380c696ba9"} Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.475444 4886 scope.go:117] "RemoveContainer" containerID="9f34127cac6e731efb7bb7b587134de360fe4f46f5d0318789cbe627491f256e" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.494325 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.511984 4886 scope.go:117] "RemoveContainer" containerID="c0dd2bed96786f932ceec332c7021984c902d0889c7df7e2cb747a942cf38b76" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.523778 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.534743 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535135 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="sg-core" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535147 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="sg-core" Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535160 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerName="dnsmasq-dns" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535165 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerName="dnsmasq-dns" Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535186 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" containerName="nova-manage" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535192 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" containerName="nova-manage" Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535203 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerName="init" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535208 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerName="init" Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535239 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-central-agent" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535245 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-central-agent" Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535333 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="proxy-httpd" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535340 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="proxy-httpd" Feb 19 21:23:20 crc kubenswrapper[4886]: E0219 21:23:20.535356 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-notification-agent" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535363 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-notification-agent" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535546 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="proxy-httpd" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535570 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-notification-agent" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535585 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" containerName="nova-manage" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535604 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6f412f-9f3e-466b-b65d-91fe1a38e212" containerName="dnsmasq-dns" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535619 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="sg-core" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.535638 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" containerName="ceilometer-central-agent" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.537699 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.560203 4886 scope.go:117] "RemoveContainer" containerID="f98a37daf2d11a3c035b4f1deb3c09ce86d68609dd852e70f7d58d1daba67de4" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.560534 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.561611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.589164 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.628955 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.630157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q22fh\" (UniqueName: \"kubernetes.io/projected/30b792c2-edf7-43c5-9627-fb270dbdefc1-kube-api-access-q22fh\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.631330 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-scripts\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.633676 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-run-httpd\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.633884 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-config-data\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.634302 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.635292 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-log-httpd\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.648713 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c0d65b-3609-4dd4-a676-8a8baa757cfe" path="/var/lib/kubelet/pods/75c0d65b-3609-4dd4-a676-8a8baa757cfe/volumes" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.649800 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.649840 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.649852 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.650037 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-log" containerID="cri-o://8731cc61a269fc5f0327dc8cad3974afdf53bde7cb61491cd09ae1c042f66db4" gracePeriod=30 Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.650123 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" containerName="nova-scheduler-scheduler" containerID="cri-o://a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d" gracePeriod=30 Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.650171 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-metadata" containerID="cri-o://e33b07d5797f512e28fcf7f5da5d690f7cce83d3ff1acb2cf45f33bb76d86d86" gracePeriod=30 Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.650386 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-log" containerID="cri-o://e216e92d61e86a2826395229db125e1e9c96ca80447561479493a535dc21735b" gracePeriod=30 Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.650430 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-api" containerID="cri-o://d20ae047f5e299f0ad7737e8201e4d6fc09fec65ff38496b98ad99f02bbe3691" gracePeriod=30 Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.695379 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 
21:23:20.695578 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.737134 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-scripts\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.737402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-run-httpd\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.737524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-config-data\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.737673 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.737939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-run-httpd\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.738156 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-log-httpd\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.738428 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-log-httpd\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.738478 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.738572 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q22fh\" (UniqueName: \"kubernetes.io/projected/30b792c2-edf7-43c5-9627-fb270dbdefc1-kube-api-access-q22fh\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.742304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.742887 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-scripts\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: 
I0219 21:23:20.745829 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.750227 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-config-data\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.758961 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q22fh\" (UniqueName: \"kubernetes.io/projected/30b792c2-edf7-43c5-9627-fb270dbdefc1-kube-api-access-q22fh\") pod \"ceilometer-0\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " pod="openstack/ceilometer-0" Feb 19 21:23:20 crc kubenswrapper[4886]: I0219 21:23:20.894492 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:21 crc kubenswrapper[4886]: I0219 21:23:21.471624 4886 generic.go:334] "Generic (PLEG): container finished" podID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerID="e33b07d5797f512e28fcf7f5da5d690f7cce83d3ff1acb2cf45f33bb76d86d86" exitCode=0 Feb 19 21:23:21 crc kubenswrapper[4886]: I0219 21:23:21.471915 4886 generic.go:334] "Generic (PLEG): container finished" podID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerID="8731cc61a269fc5f0327dc8cad3974afdf53bde7cb61491cd09ae1c042f66db4" exitCode=143 Feb 19 21:23:21 crc kubenswrapper[4886]: I0219 21:23:21.471735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8","Type":"ContainerDied","Data":"e33b07d5797f512e28fcf7f5da5d690f7cce83d3ff1acb2cf45f33bb76d86d86"} Feb 19 21:23:21 crc kubenswrapper[4886]: I0219 21:23:21.472125 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8","Type":"ContainerDied","Data":"8731cc61a269fc5f0327dc8cad3974afdf53bde7cb61491cd09ae1c042f66db4"} Feb 19 21:23:21 crc kubenswrapper[4886]: I0219 21:23:21.477368 4886 generic.go:334] "Generic (PLEG): container finished" podID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerID="e216e92d61e86a2826395229db125e1e9c96ca80447561479493a535dc21735b" exitCode=143 Feb 19 21:23:21 crc kubenswrapper[4886]: I0219 21:23:21.477404 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68bf5140-9213-4b4b-a7e1-ec92c4145c66","Type":"ContainerDied","Data":"e216e92d61e86a2826395229db125e1e9c96ca80447561479493a535dc21735b"} Feb 19 21:23:21 crc kubenswrapper[4886]: E0219 21:23:21.913592 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 21:23:21 crc kubenswrapper[4886]: E0219 21:23:21.915079 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 21:23:21 crc kubenswrapper[4886]: E0219 21:23:21.916839 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 21:23:21 crc kubenswrapper[4886]: E0219 21:23:21.916873 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" containerName="nova-scheduler-scheduler" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.254941 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.292653 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-config-data\") pod \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.292697 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npvns\" (UniqueName: \"kubernetes.io/projected/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-kube-api-access-npvns\") pod \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.292786 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-combined-ca-bundle\") pod \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.292859 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-logs\") pod \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.292937 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-nova-metadata-tls-certs\") pod \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\" (UID: \"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8\") " Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.300086 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-kube-api-access-npvns" (OuterVolumeSpecName: "kube-api-access-npvns") pod "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" (UID: "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8"). InnerVolumeSpecName "kube-api-access-npvns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.316723 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-logs" (OuterVolumeSpecName: "logs") pod "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" (UID: "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.333055 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" (UID: "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.333156 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-config-data" (OuterVolumeSpecName: "config-data") pod "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" (UID: "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.373009 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" (UID: "9fa8df1e-a716-44da-8106-ebf2bfa8ddc8"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.396316 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.396366 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.396382 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.396395 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npvns\" (UniqueName: \"kubernetes.io/projected/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-kube-api-access-npvns\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.396407 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.489153 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa8df1e-a716-44da-8106-ebf2bfa8ddc8","Type":"ContainerDied","Data":"9c13ec31a42dcd6ce58acf398637b5393f69f365cb38ea3085493df72cca34e1"} Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.490450 4886 scope.go:117] "RemoveContainer" containerID="e33b07d5797f512e28fcf7f5da5d690f7cce83d3ff1acb2cf45f33bb76d86d86" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.490566 4886 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.530274 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.539706 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.565509 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:22 crc kubenswrapper[4886]: E0219 21:23:22.566144 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-log" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.566169 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-log" Feb 19 21:23:22 crc kubenswrapper[4886]: E0219 21:23:22.566205 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-metadata" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.566215 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-metadata" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.566556 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-metadata" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.566602 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" containerName="nova-metadata-log" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.570085 4886 scope.go:117] "RemoveContainer" containerID="8731cc61a269fc5f0327dc8cad3974afdf53bde7cb61491cd09ae1c042f66db4" Feb 19 21:23:22 crc 
kubenswrapper[4886]: I0219 21:23:22.571458 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.574318 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.574554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.599866 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.600062 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.600145 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fe39c39-ee1f-49c5-bd83-a984ee05475d-logs\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.600205 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2cxr\" (UniqueName: \"kubernetes.io/projected/2fe39c39-ee1f-49c5-bd83-a984ee05475d-kube-api-access-v2cxr\") pod \"nova-metadata-0\" (UID: 
\"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.600338 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-config-data\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.670689 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa8df1e-a716-44da-8106-ebf2bfa8ddc8" path="/var/lib/kubelet/pods/9fa8df1e-a716-44da-8106-ebf2bfa8ddc8/volumes" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.671457 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.675042 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.702728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2cxr\" (UniqueName: \"kubernetes.io/projected/2fe39c39-ee1f-49c5-bd83-a984ee05475d-kube-api-access-v2cxr\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.702859 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-config-data\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.704866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.711374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.717799 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.718913 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fe39c39-ee1f-49c5-bd83-a984ee05475d-logs\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.722146 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fe39c39-ee1f-49c5-bd83-a984ee05475d-logs\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.724636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-config-data\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: 
I0219 21:23:22.725484 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.725718 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2cxr\" (UniqueName: \"kubernetes.io/projected/2fe39c39-ee1f-49c5-bd83-a984ee05475d-kube-api-access-v2cxr\") pod \"nova-metadata-0\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") " pod="openstack/nova-metadata-0" Feb 19 21:23:22 crc kubenswrapper[4886]: I0219 21:23:22.896475 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 21:23:23 crc kubenswrapper[4886]: I0219 21:23:23.503901 4886 generic.go:334] "Generic (PLEG): container finished" podID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" containerID="a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d" exitCode=0 Feb 19 21:23:23 crc kubenswrapper[4886]: I0219 21:23:23.503977 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f270bcdf-715a-4ceb-8234-bcdca0aee4ec","Type":"ContainerDied","Data":"a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d"} Feb 19 21:23:23 crc kubenswrapper[4886]: I0219 21:23:23.505777 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerStarted","Data":"6ae6fef2e65250b32f203317d700853e8e440afa84decf61dc3dff5313a41e50"} Feb 19 21:23:23 crc kubenswrapper[4886]: I0219 21:23:23.528411 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:23:24 crc kubenswrapper[4886]: I0219 21:23:24.519543 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerID="d20ae047f5e299f0ad7737e8201e4d6fc09fec65ff38496b98ad99f02bbe3691" exitCode=0 Feb 19 21:23:24 crc kubenswrapper[4886]: I0219 21:23:24.519629 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68bf5140-9213-4b4b-a7e1-ec92c4145c66","Type":"ContainerDied","Data":"d20ae047f5e299f0ad7737e8201e4d6fc09fec65ff38496b98ad99f02bbe3691"} Feb 19 21:23:24 crc kubenswrapper[4886]: I0219 21:23:24.521423 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fe39c39-ee1f-49c5-bd83-a984ee05475d","Type":"ContainerStarted","Data":"54f23a724a94b214441c93a63f27ae4b285ea2bdbb1d77f1660cbdf22a409311"} Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.449302 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.456616 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.503618 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-combined-ca-bundle\") pod \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.504070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf5140-9213-4b4b-a7e1-ec92c4145c66-logs\") pod \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.504096 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-combined-ca-bundle\") pod \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.504211 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-config-data\") pod \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.504239 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-config-data\") pod \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.504280 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdvh\" (UniqueName: 
\"kubernetes.io/projected/68bf5140-9213-4b4b-a7e1-ec92c4145c66-kube-api-access-cqdvh\") pod \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\" (UID: \"68bf5140-9213-4b4b-a7e1-ec92c4145c66\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.504356 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pq69\" (UniqueName: \"kubernetes.io/projected/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-kube-api-access-9pq69\") pod \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\" (UID: \"f270bcdf-715a-4ceb-8234-bcdca0aee4ec\") " Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.571859 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-kube-api-access-9pq69" (OuterVolumeSpecName: "kube-api-access-9pq69") pod "f270bcdf-715a-4ceb-8234-bcdca0aee4ec" (UID: "f270bcdf-715a-4ceb-8234-bcdca0aee4ec"). InnerVolumeSpecName "kube-api-access-9pq69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.572472 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68bf5140-9213-4b4b-a7e1-ec92c4145c66-logs" (OuterVolumeSpecName: "logs") pod "68bf5140-9213-4b4b-a7e1-ec92c4145c66" (UID: "68bf5140-9213-4b4b-a7e1-ec92c4145c66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.575576 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bf5140-9213-4b4b-a7e1-ec92c4145c66-kube-api-access-cqdvh" (OuterVolumeSpecName: "kube-api-access-cqdvh") pod "68bf5140-9213-4b4b-a7e1-ec92c4145c66" (UID: "68bf5140-9213-4b4b-a7e1-ec92c4145c66"). InnerVolumeSpecName "kube-api-access-cqdvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.583969 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f270bcdf-715a-4ceb-8234-bcdca0aee4ec" (UID: "f270bcdf-715a-4ceb-8234-bcdca0aee4ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.596889 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68bf5140-9213-4b4b-a7e1-ec92c4145c66" (UID: "68bf5140-9213-4b4b-a7e1-ec92c4145c66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607378 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf5140-9213-4b4b-a7e1-ec92c4145c66-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607408 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607456 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqdvh\" (UniqueName: \"kubernetes.io/projected/68bf5140-9213-4b4b-a7e1-ec92c4145c66-kube-api-access-cqdvh\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607469 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pq69\" (UniqueName: 
\"kubernetes.io/projected/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-kube-api-access-9pq69\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607481 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68bf5140-9213-4b4b-a7e1-ec92c4145c66","Type":"ContainerDied","Data":"e9a1bd525427ee1bf2d09693ac0c8532ce6ed31f091d53c4f7f3819c632c359d"} Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607736 4886 scope.go:117] "RemoveContainer" containerID="d20ae047f5e299f0ad7737e8201e4d6fc09fec65ff38496b98ad99f02bbe3691" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.607868 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.615190 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-config-data" (OuterVolumeSpecName: "config-data") pod "f270bcdf-715a-4ceb-8234-bcdca0aee4ec" (UID: "f270bcdf-715a-4ceb-8234-bcdca0aee4ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.618464 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f270bcdf-715a-4ceb-8234-bcdca0aee4ec","Type":"ContainerDied","Data":"79ddd48b9b548cc1e30b41e93b36aa005e262d59784f248d28673eaa2948779a"} Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.618563 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.628486 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fe39c39-ee1f-49c5-bd83-a984ee05475d","Type":"ContainerStarted","Data":"2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373"} Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.628531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fe39c39-ee1f-49c5-bd83-a984ee05475d","Type":"ContainerStarted","Data":"39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d"} Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.635845 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-config-data" (OuterVolumeSpecName: "config-data") pod "68bf5140-9213-4b4b-a7e1-ec92c4145c66" (UID: "68bf5140-9213-4b4b-a7e1-ec92c4145c66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.670397 4886 scope.go:117] "RemoveContainer" containerID="e216e92d61e86a2826395229db125e1e9c96ca80447561479493a535dc21735b" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.682444 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.68241266 podStartE2EDuration="3.68241266s" podCreationTimestamp="2026-02-19 21:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:25.651116048 +0000 UTC m=+1436.278959118" watchObservedRunningTime="2026-02-19 21:23:25.68241266 +0000 UTC m=+1436.310255710" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.687737 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.702069 4886 scope.go:117] "RemoveContainer" containerID="a75afd40ef36dd3851590017b9cf1cf7f29d0669199952c0598f8f670c7f3a0d" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.704800 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.710199 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf5140-9213-4b4b-a7e1-ec92c4145c66-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.710237 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f270bcdf-715a-4ceb-8234-bcdca0aee4ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.718707 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:25 crc kubenswrapper[4886]: E0219 21:23:25.719307 
4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-log" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.719462 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-log" Feb 19 21:23:25 crc kubenswrapper[4886]: E0219 21:23:25.719533 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-api" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.719591 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-api" Feb 19 21:23:25 crc kubenswrapper[4886]: E0219 21:23:25.719674 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" containerName="nova-scheduler-scheduler" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.719725 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" containerName="nova-scheduler-scheduler" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.719986 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-log" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.720043 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" containerName="nova-api-api" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.720101 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" containerName="nova-scheduler-scheduler" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.720932 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.726433 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.729212 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.811587 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsd8\" (UniqueName: \"kubernetes.io/projected/d2615997-dab1-4bc3-a5cd-1f89031674c8-kube-api-access-fcsd8\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.811927 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-config-data\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.812343 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.913995 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcsd8\" (UniqueName: \"kubernetes.io/projected/d2615997-dab1-4bc3-a5cd-1f89031674c8-kube-api-access-fcsd8\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.914233 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-config-data\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.914422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.918110 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-config-data\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.918365 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:25 crc kubenswrapper[4886]: I0219 21:23:25.930355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcsd8\" (UniqueName: \"kubernetes.io/projected/d2615997-dab1-4bc3-a5cd-1f89031674c8-kube-api-access-fcsd8\") pod \"nova-scheduler-0\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") " pod="openstack/nova-scheduler-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.009159 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.037054 4886 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.051462 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.052103 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.053714 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.057613 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.071623 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.119343 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-config-data\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.119521 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9902d175-54e2-470a-b837-b69acf4393d8-logs\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.119639 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.119690 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f7pt\" (UniqueName: \"kubernetes.io/projected/9902d175-54e2-470a-b837-b69acf4393d8-kube-api-access-5f7pt\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.221628 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-config-data\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.222060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9902d175-54e2-470a-b837-b69acf4393d8-logs\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.222444 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9902d175-54e2-470a-b837-b69acf4393d8-logs\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.222556 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.222575 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f7pt\" (UniqueName: \"kubernetes.io/projected/9902d175-54e2-470a-b837-b69acf4393d8-kube-api-access-5f7pt\") pod \"nova-api-0\" (UID: 
\"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.228981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.235917 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-config-data\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.264709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f7pt\" (UniqueName: \"kubernetes.io/projected/9902d175-54e2-470a-b837-b69acf4393d8-kube-api-access-5f7pt\") pod \"nova-api-0\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.385933 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.630844 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68bf5140-9213-4b4b-a7e1-ec92c4145c66" path="/var/lib/kubelet/pods/68bf5140-9213-4b4b-a7e1-ec92c4145c66/volumes" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.632437 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f270bcdf-715a-4ceb-8234-bcdca0aee4ec" path="/var/lib/kubelet/pods/f270bcdf-715a-4ceb-8234-bcdca0aee4ec/volumes" Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.659821 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerStarted","Data":"9c0678faa3269b581c0a73fd70eafd59e53b5b6dcf68b9c07423c098d2dd578e"} Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.662311 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerStarted","Data":"b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446"} Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.688967 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:23:26 crc kubenswrapper[4886]: I0219 21:23:26.921287 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:26 crc kubenswrapper[4886]: W0219 21:23:26.930757 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9902d175_54e2_470a_b837_b69acf4393d8.slice/crio-299a2736af6b1f8fc669a0493ea6f9d167ffb31b3b6208bdbedb0db74f93ae6b WatchSource:0}: Error finding container 299a2736af6b1f8fc669a0493ea6f9d167ffb31b3b6208bdbedb0db74f93ae6b: Status 404 returned error can't find the container with id 299a2736af6b1f8fc669a0493ea6f9d167ffb31b3b6208bdbedb0db74f93ae6b Feb 19 21:23:27 
crc kubenswrapper[4886]: I0219 21:23:27.692402 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2615997-dab1-4bc3-a5cd-1f89031674c8","Type":"ContainerStarted","Data":"e434787c1ac46e3a529d26a14b6c40f92f648f09e2b8b7d25fff60b0cb7b6c83"} Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.692649 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2615997-dab1-4bc3-a5cd-1f89031674c8","Type":"ContainerStarted","Data":"91440b02bbdf51b6dbf25b4ffdc4828103a98877db388884de5a446f20350cc4"} Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.705736 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9902d175-54e2-470a-b837-b69acf4393d8","Type":"ContainerStarted","Data":"cab1c5543420d3e7b9d6e28ba128ec3fa1923130cec862b585fd2e1b63897f26"} Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.705768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9902d175-54e2-470a-b837-b69acf4393d8","Type":"ContainerStarted","Data":"f4f3c23d68eaaad9cb395300da7bed6e5ddeeaeccdfe8ce698fda819ea141e67"} Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.705777 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9902d175-54e2-470a-b837-b69acf4393d8","Type":"ContainerStarted","Data":"299a2736af6b1f8fc669a0493ea6f9d167ffb31b3b6208bdbedb0db74f93ae6b"} Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.711301 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.711249648 podStartE2EDuration="2.711249648s" podCreationTimestamp="2026-02-19 21:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:27.708030879 +0000 UTC m=+1438.335873929" watchObservedRunningTime="2026-02-19 
21:23:27.711249648 +0000 UTC m=+1438.339092698" Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.745104 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.745089192 podStartE2EDuration="2.745089192s" podCreationTimestamp="2026-02-19 21:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:27.741698489 +0000 UTC m=+1438.369541539" watchObservedRunningTime="2026-02-19 21:23:27.745089192 +0000 UTC m=+1438.372932232" Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.897401 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 21:23:27 crc kubenswrapper[4886]: I0219 21:23:27.900192 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 21:23:29 crc kubenswrapper[4886]: I0219 21:23:29.732005 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerStarted","Data":"a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3"} Feb 19 21:23:30 crc kubenswrapper[4886]: I0219 21:23:30.747307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerStarted","Data":"1be42c2a3daa8ceb41689ab5d9e3a69bd9fea91cc818e0fe2fe86e66bf67768a"} Feb 19 21:23:30 crc kubenswrapper[4886]: I0219 21:23:30.747511 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-api" containerID="cri-o://b6e5babe4369fa8f4e9c46a65995605769fd12d2b75c1be9e8f74cb41f0a1923" gracePeriod=30 Feb 19 21:23:30 crc kubenswrapper[4886]: I0219 21:23:30.747562 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/aodh-0" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-notifier" containerID="cri-o://9c0678faa3269b581c0a73fd70eafd59e53b5b6dcf68b9c07423c098d2dd578e" gracePeriod=30 Feb 19 21:23:30 crc kubenswrapper[4886]: I0219 21:23:30.747541 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-listener" containerID="cri-o://1be42c2a3daa8ceb41689ab5d9e3a69bd9fea91cc818e0fe2fe86e66bf67768a" gracePeriod=30 Feb 19 21:23:30 crc kubenswrapper[4886]: I0219 21:23:30.747623 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-evaluator" containerID="cri-o://7a5caf783c735afe9baa3d021033615037568552f2d2a8bfa85248380c696ba9" gracePeriod=30 Feb 19 21:23:30 crc kubenswrapper[4886]: I0219 21:23:30.774452 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.004059171 podStartE2EDuration="17.774426392s" podCreationTimestamp="2026-02-19 21:23:13 +0000 UTC" firstStartedPulling="2026-02-19 21:23:14.737044549 +0000 UTC m=+1425.364887589" lastFinishedPulling="2026-02-19 21:23:29.50741173 +0000 UTC m=+1440.135254810" observedRunningTime="2026-02-19 21:23:30.768983248 +0000 UTC m=+1441.396826288" watchObservedRunningTime="2026-02-19 21:23:30.774426392 +0000 UTC m=+1441.402269452" Feb 19 21:23:31 crc kubenswrapper[4886]: I0219 21:23:31.052648 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 21:23:31 crc kubenswrapper[4886]: I0219 21:23:31.791808 4886 generic.go:334] "Generic (PLEG): container finished" podID="289abf37-876b-4ef4-8782-9749f978c46f" containerID="7a5caf783c735afe9baa3d021033615037568552f2d2a8bfa85248380c696ba9" exitCode=0 Feb 19 21:23:31 crc kubenswrapper[4886]: I0219 21:23:31.792473 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerDied","Data":"7a5caf783c735afe9baa3d021033615037568552f2d2a8bfa85248380c696ba9"} Feb 19 21:23:31 crc kubenswrapper[4886]: I0219 21:23:31.795085 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerDied","Data":"b6e5babe4369fa8f4e9c46a65995605769fd12d2b75c1be9e8f74cb41f0a1923"} Feb 19 21:23:31 crc kubenswrapper[4886]: I0219 21:23:31.794952 4886 generic.go:334] "Generic (PLEG): container finished" podID="289abf37-876b-4ef4-8782-9749f978c46f" containerID="b6e5babe4369fa8f4e9c46a65995605769fd12d2b75c1be9e8f74cb41f0a1923" exitCode=0 Feb 19 21:23:31 crc kubenswrapper[4886]: I0219 21:23:31.810454 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerStarted","Data":"01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1"} Feb 19 21:23:32 crc kubenswrapper[4886]: I0219 21:23:32.824545 4886 generic.go:334] "Generic (PLEG): container finished" podID="9dd3f102-b16f-4118-a32e-64eda5ae8047" containerID="06077776d05e160695a0e955fdec2b08900c1467c2e40c59483a3e97149a8fb0" exitCode=0 Feb 19 21:23:32 crc kubenswrapper[4886]: I0219 21:23:32.824602 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9c97q" event={"ID":"9dd3f102-b16f-4118-a32e-64eda5ae8047","Type":"ContainerDied","Data":"06077776d05e160695a0e955fdec2b08900c1467c2e40c59483a3e97149a8fb0"} Feb 19 21:23:32 crc kubenswrapper[4886]: I0219 21:23:32.897423 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 21:23:32 crc kubenswrapper[4886]: I0219 21:23:32.899582 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 21:23:33 crc kubenswrapper[4886]: 
I0219 21:23:33.838207 4886 generic.go:334] "Generic (PLEG): container finished" podID="289abf37-876b-4ef4-8782-9749f978c46f" containerID="9c0678faa3269b581c0a73fd70eafd59e53b5b6dcf68b9c07423c098d2dd578e" exitCode=0 Feb 19 21:23:33 crc kubenswrapper[4886]: I0219 21:23:33.838321 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerDied","Data":"9c0678faa3269b581c0a73fd70eafd59e53b5b6dcf68b9c07423c098d2dd578e"} Feb 19 21:23:33 crc kubenswrapper[4886]: I0219 21:23:33.909524 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 21:23:33 crc kubenswrapper[4886]: I0219 21:23:33.909968 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.278290 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.451470 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-scripts\") pod \"9dd3f102-b16f-4118-a32e-64eda5ae8047\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.451550 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-combined-ca-bundle\") pod \"9dd3f102-b16f-4118-a32e-64eda5ae8047\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.451622 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-config-data\") pod \"9dd3f102-b16f-4118-a32e-64eda5ae8047\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.451828 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2nlc\" (UniqueName: \"kubernetes.io/projected/9dd3f102-b16f-4118-a32e-64eda5ae8047-kube-api-access-j2nlc\") pod \"9dd3f102-b16f-4118-a32e-64eda5ae8047\" (UID: \"9dd3f102-b16f-4118-a32e-64eda5ae8047\") " Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.460432 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-scripts" (OuterVolumeSpecName: "scripts") pod "9dd3f102-b16f-4118-a32e-64eda5ae8047" (UID: "9dd3f102-b16f-4118-a32e-64eda5ae8047"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.460509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd3f102-b16f-4118-a32e-64eda5ae8047-kube-api-access-j2nlc" (OuterVolumeSpecName: "kube-api-access-j2nlc") pod "9dd3f102-b16f-4118-a32e-64eda5ae8047" (UID: "9dd3f102-b16f-4118-a32e-64eda5ae8047"). InnerVolumeSpecName "kube-api-access-j2nlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.495811 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-config-data" (OuterVolumeSpecName: "config-data") pod "9dd3f102-b16f-4118-a32e-64eda5ae8047" (UID: "9dd3f102-b16f-4118-a32e-64eda5ae8047"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.511532 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dd3f102-b16f-4118-a32e-64eda5ae8047" (UID: "9dd3f102-b16f-4118-a32e-64eda5ae8047"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.555038 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.555097 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.555114 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd3f102-b16f-4118-a32e-64eda5ae8047-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.555126 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2nlc\" (UniqueName: \"kubernetes.io/projected/9dd3f102-b16f-4118-a32e-64eda5ae8047-kube-api-access-j2nlc\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.856935 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerStarted","Data":"d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1"} Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.857485 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.870545 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9c97q" event={"ID":"9dd3f102-b16f-4118-a32e-64eda5ae8047","Type":"ContainerDied","Data":"03457f8c66730aa765c8b6a11716fd79660351e177e6a709ab2da117cefb5fce"} Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.870587 4886 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="03457f8c66730aa765c8b6a11716fd79660351e177e6a709ab2da117cefb5fce" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.870671 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9c97q" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.907802 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.97718176 podStartE2EDuration="14.907783707s" podCreationTimestamp="2026-02-19 21:23:20 +0000 UTC" firstStartedPulling="2026-02-19 21:23:22.62917163 +0000 UTC m=+1433.257014680" lastFinishedPulling="2026-02-19 21:23:33.559773587 +0000 UTC m=+1444.187616627" observedRunningTime="2026-02-19 21:23:34.896685323 +0000 UTC m=+1445.524528373" watchObservedRunningTime="2026-02-19 21:23:34.907783707 +0000 UTC m=+1445.535626757" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.935370 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 21:23:34 crc kubenswrapper[4886]: E0219 21:23:34.936057 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd3f102-b16f-4118-a32e-64eda5ae8047" containerName="nova-cell1-conductor-db-sync" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.936084 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd3f102-b16f-4118-a32e-64eda5ae8047" containerName="nova-cell1-conductor-db-sync" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.936410 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd3f102-b16f-4118-a32e-64eda5ae8047" containerName="nova-cell1-conductor-db-sync" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.937246 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.942627 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 21:23:34 crc kubenswrapper[4886]: I0219 21:23:34.950356 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.065250 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b321f6b-f143-4e4e-9ed2-678afa918fe8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.065812 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4pp\" (UniqueName: \"kubernetes.io/projected/1b321f6b-f143-4e4e-9ed2-678afa918fe8-kube-api-access-qg4pp\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.065991 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b321f6b-f143-4e4e-9ed2-678afa918fe8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.167413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg4pp\" (UniqueName: \"kubernetes.io/projected/1b321f6b-f143-4e4e-9ed2-678afa918fe8-kube-api-access-qg4pp\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 
21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.167525 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b321f6b-f143-4e4e-9ed2-678afa918fe8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.167578 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b321f6b-f143-4e4e-9ed2-678afa918fe8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.174557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b321f6b-f143-4e4e-9ed2-678afa918fe8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.175158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b321f6b-f143-4e4e-9ed2-678afa918fe8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.189817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg4pp\" (UniqueName: \"kubernetes.io/projected/1b321f6b-f143-4e4e-9ed2-678afa918fe8-kube-api-access-qg4pp\") pod \"nova-cell1-conductor-0\" (UID: \"1b321f6b-f143-4e4e-9ed2-678afa918fe8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.260733 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.821310 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 21:23:35 crc kubenswrapper[4886]: I0219 21:23:35.894611 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1b321f6b-f143-4e4e-9ed2-678afa918fe8","Type":"ContainerStarted","Data":"63b324b8ac6a9a14a55ec23994c864bf2a0d27a1ed570f9a6e5f19b5f0ed206e"} Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.053334 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.143325 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.386251 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.386808 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.906052 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1b321f6b-f143-4e4e-9ed2-678afa918fe8","Type":"ContainerStarted","Data":"d818f40b3b063ae7ebd7550d16d7fcdb3aac3bc16c307d870e99fe4d366fe3d0"} Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.908058 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.938387 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.938364499 podStartE2EDuration="2.938364499s" podCreationTimestamp="2026-02-19 21:23:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:36.929137252 +0000 UTC m=+1447.556980302" watchObservedRunningTime="2026-02-19 21:23:36.938364499 +0000 UTC m=+1447.566207549" Feb 19 21:23:36 crc kubenswrapper[4886]: I0219 21:23:36.940792 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 21:23:37 crc kubenswrapper[4886]: I0219 21:23:37.469580 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 21:23:37 crc kubenswrapper[4886]: I0219 21:23:37.469581 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 21:23:42 crc kubenswrapper[4886]: I0219 21:23:42.904367 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 21:23:42 crc kubenswrapper[4886]: I0219 21:23:42.911608 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 21:23:42 crc kubenswrapper[4886]: I0219 21:23:42.913145 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 21:23:42 crc kubenswrapper[4886]: I0219 21:23:42.972905 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.770595 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.905083 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4htvw\" (UniqueName: \"kubernetes.io/projected/1e308384-cf87-4aaa-8d73-f35645e20d34-kube-api-access-4htvw\") pod \"1e308384-cf87-4aaa-8d73-f35645e20d34\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.905321 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-combined-ca-bundle\") pod \"1e308384-cf87-4aaa-8d73-f35645e20d34\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.905499 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-config-data\") pod \"1e308384-cf87-4aaa-8d73-f35645e20d34\" (UID: \"1e308384-cf87-4aaa-8d73-f35645e20d34\") " Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.915537 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e308384-cf87-4aaa-8d73-f35645e20d34-kube-api-access-4htvw" (OuterVolumeSpecName: "kube-api-access-4htvw") pod "1e308384-cf87-4aaa-8d73-f35645e20d34" (UID: "1e308384-cf87-4aaa-8d73-f35645e20d34"). InnerVolumeSpecName "kube-api-access-4htvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.941200 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e308384-cf87-4aaa-8d73-f35645e20d34" (UID: "1e308384-cf87-4aaa-8d73-f35645e20d34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.944801 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-config-data" (OuterVolumeSpecName: "config-data") pod "1e308384-cf87-4aaa-8d73-f35645e20d34" (UID: "1e308384-cf87-4aaa-8d73-f35645e20d34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.995009 4886 generic.go:334] "Generic (PLEG): container finished" podID="1e308384-cf87-4aaa-8d73-f35645e20d34" containerID="92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c" exitCode=137 Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.995103 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1e308384-cf87-4aaa-8d73-f35645e20d34","Type":"ContainerDied","Data":"92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c"} Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.995143 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1e308384-cf87-4aaa-8d73-f35645e20d34","Type":"ContainerDied","Data":"448a8d048ad3641d18ef90113e9512126fa1309a95cd97231f025e755b944312"} Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.995161 4886 scope.go:117] "RemoveContainer" containerID="92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c" Feb 19 21:23:44 crc kubenswrapper[4886]: I0219 21:23:44.995467 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.009551 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.009593 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4htvw\" (UniqueName: \"kubernetes.io/projected/1e308384-cf87-4aaa-8d73-f35645e20d34-kube-api-access-4htvw\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.009608 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e308384-cf87-4aaa-8d73-f35645e20d34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.043322 4886 scope.go:117] "RemoveContainer" containerID="92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c" Feb 19 21:23:45 crc kubenswrapper[4886]: E0219 21:23:45.046764 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c\": container with ID starting with 92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c not found: ID does not exist" containerID="92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.046817 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c"} err="failed to get container status \"92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c\": rpc error: code = NotFound desc = could not find container \"92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c\": container 
with ID starting with 92188cfe74b7f7f6cb4d9947cd0922012d2ecb18f5fc9f5fda34360dd72dd38c not found: ID does not exist" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.052373 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.064647 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.079310 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:45 crc kubenswrapper[4886]: E0219 21:23:45.080086 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e308384-cf87-4aaa-8d73-f35645e20d34" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.080117 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e308384-cf87-4aaa-8d73-f35645e20d34" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.080553 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e308384-cf87-4aaa-8d73-f35645e20d34" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.081987 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.083582 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.084915 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.084954 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.090253 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.213159 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.213851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.214033 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc 
kubenswrapper[4886]: I0219 21:23:45.214317 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vpz\" (UniqueName: \"kubernetes.io/projected/1669b6bb-5095-4580-a387-4a73045e5196-kube-api-access-96vpz\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.214553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.298546 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.317112 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.317167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.317216 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.317363 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vpz\" (UniqueName: \"kubernetes.io/projected/1669b6bb-5095-4580-a387-4a73045e5196-kube-api-access-96vpz\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.317508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.323934 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.325101 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.325877 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-config-data\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.330118 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1669b6bb-5095-4580-a387-4a73045e5196-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.389673 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vpz\" (UniqueName: \"kubernetes.io/projected/1669b6bb-5095-4580-a387-4a73045e5196-kube-api-access-96vpz\") pod \"nova-cell1-novncproxy-0\" (UID: \"1669b6bb-5095-4580-a387-4a73045e5196\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.414750 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:45 crc kubenswrapper[4886]: W0219 21:23:45.920643 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1669b6bb_5095_4580_a387_4a73045e5196.slice/crio-9100cdcbdd4c0276d502ca67d2cf9e69d595c3252d237e129940088cbf7e72a6 WatchSource:0}: Error finding container 9100cdcbdd4c0276d502ca67d2cf9e69d595c3252d237e129940088cbf7e72a6: Status 404 returned error can't find the container with id 9100cdcbdd4c0276d502ca67d2cf9e69d595c3252d237e129940088cbf7e72a6 Feb 19 21:23:45 crc kubenswrapper[4886]: I0219 21:23:45.924625 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 21:23:46 crc kubenswrapper[4886]: I0219 21:23:46.006211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"1669b6bb-5095-4580-a387-4a73045e5196","Type":"ContainerStarted","Data":"9100cdcbdd4c0276d502ca67d2cf9e69d595c3252d237e129940088cbf7e72a6"} Feb 19 21:23:46 crc kubenswrapper[4886]: I0219 21:23:46.390876 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 21:23:46 crc kubenswrapper[4886]: I0219 21:23:46.391975 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 21:23:46 crc kubenswrapper[4886]: I0219 21:23:46.392134 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 21:23:46 crc kubenswrapper[4886]: I0219 21:23:46.395658 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 21:23:46 crc kubenswrapper[4886]: I0219 21:23:46.615523 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e308384-cf87-4aaa-8d73-f35645e20d34" path="/var/lib/kubelet/pods/1e308384-cf87-4aaa-8d73-f35645e20d34/volumes" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.018311 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1669b6bb-5095-4580-a387-4a73045e5196","Type":"ContainerStarted","Data":"f83339dcc42d7e98f6dce2134aeac963fb617910e80d9c439917fa0d2630812a"} Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.019020 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.040906 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.056792 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.056773296 podStartE2EDuration="2.056773296s" podCreationTimestamp="2026-02-19 21:23:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:47.047882257 +0000 UTC m=+1457.675725307" watchObservedRunningTime="2026-02-19 21:23:47.056773296 +0000 UTC m=+1457.684616336" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.244151 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-77vlz"] Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.246004 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.301429 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-77vlz"] Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.373575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.373698 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.373757 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc 
kubenswrapper[4886]: I0219 21:23:47.373784 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.373811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.373839 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd62c\" (UniqueName: \"kubernetes.io/projected/2442d861-718b-433c-9d24-06d9565234c8-kube-api-access-kd62c\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.476493 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.476637 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc 
kubenswrapper[4886]: I0219 21:23:47.476687 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.476728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.476771 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd62c\" (UniqueName: \"kubernetes.io/projected/2442d861-718b-433c-9d24-06d9565234c8-kube-api-access-kd62c\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.476886 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.477547 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.477669 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.477699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.477792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.477894 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.500380 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd62c\" (UniqueName: \"kubernetes.io/projected/2442d861-718b-433c-9d24-06d9565234c8-kube-api-access-kd62c\") pod \"dnsmasq-dns-6b7bbf7cf9-77vlz\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:47 crc kubenswrapper[4886]: I0219 21:23:47.568424 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:48 crc kubenswrapper[4886]: I0219 21:23:48.067582 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-77vlz"] Feb 19 21:23:48 crc kubenswrapper[4886]: I0219 21:23:48.326359 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:23:48 crc kubenswrapper[4886]: I0219 21:23:48.326638 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:23:49 crc kubenswrapper[4886]: I0219 21:23:49.038514 4886 generic.go:334] "Generic (PLEG): container finished" podID="2442d861-718b-433c-9d24-06d9565234c8" containerID="007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226" exitCode=0 Feb 19 21:23:49 crc kubenswrapper[4886]: I0219 21:23:49.038567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" event={"ID":"2442d861-718b-433c-9d24-06d9565234c8","Type":"ContainerDied","Data":"007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226"} Feb 19 21:23:49 crc kubenswrapper[4886]: I0219 21:23:49.039158 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" event={"ID":"2442d861-718b-433c-9d24-06d9565234c8","Type":"ContainerStarted","Data":"ec04765fdd7257d6e0d2ccd18c18864257f9732d21f25317ddd7483d99583c93"} Feb 19 21:23:49 crc kubenswrapper[4886]: I0219 21:23:49.812944 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.056474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" event={"ID":"2442d861-718b-433c-9d24-06d9565234c8","Type":"ContainerStarted","Data":"a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac"} Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.056988 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-log" containerID="cri-o://f4f3c23d68eaaad9cb395300da7bed6e5ddeeaeccdfe8ce698fda819ea141e67" gracePeriod=30 Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.057000 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.057081 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-api" containerID="cri-o://cab1c5543420d3e7b9d6e28ba128ec3fa1923130cec862b585fd2e1b63897f26" gracePeriod=30 Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.095756 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" podStartSLOduration=3.095725693 podStartE2EDuration="3.095725693s" podCreationTimestamp="2026-02-19 21:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:50.086562577 +0000 UTC m=+1460.714405627" watchObservedRunningTime="2026-02-19 21:23:50.095725693 +0000 UTC m=+1460.723568763" Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.392105 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.392457 4886 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-central-agent" containerID="cri-o://b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446" gracePeriod=30 Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.392531 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="proxy-httpd" containerID="cri-o://d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1" gracePeriod=30 Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.392574 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-notification-agent" containerID="cri-o://a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3" gracePeriod=30 Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.392560 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="sg-core" containerID="cri-o://01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1" gracePeriod=30 Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.417713 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.495090 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.247:3000/\": read tcp 10.217.0.2:49116->10.217.0.247:3000: read: connection reset by peer" Feb 19 21:23:50 crc kubenswrapper[4886]: I0219 21:23:50.895304 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ceilometer-0" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.247:3000/\": dial tcp 10.217.0.247:3000: connect: connection refused" Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.077658 4886 generic.go:334] "Generic (PLEG): container finished" podID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerID="d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1" exitCode=0 Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.078027 4886 generic.go:334] "Generic (PLEG): container finished" podID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerID="01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1" exitCode=2 Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.078042 4886 generic.go:334] "Generic (PLEG): container finished" podID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerID="b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446" exitCode=0 Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.077738 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerDied","Data":"d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1"} Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.078120 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerDied","Data":"01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1"} Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.078140 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerDied","Data":"b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446"} Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.080405 4886 generic.go:334] "Generic (PLEG): container 
finished" podID="9902d175-54e2-470a-b837-b69acf4393d8" containerID="f4f3c23d68eaaad9cb395300da7bed6e5ddeeaeccdfe8ce698fda819ea141e67" exitCode=143 Feb 19 21:23:51 crc kubenswrapper[4886]: I0219 21:23:51.080509 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9902d175-54e2-470a-b837-b69acf4393d8","Type":"ContainerDied","Data":"f4f3c23d68eaaad9cb395300da7bed6e5ddeeaeccdfe8ce698fda819ea141e67"} Feb 19 21:23:54 crc kubenswrapper[4886]: I0219 21:23:54.126900 4886 generic.go:334] "Generic (PLEG): container finished" podID="9902d175-54e2-470a-b837-b69acf4393d8" containerID="cab1c5543420d3e7b9d6e28ba128ec3fa1923130cec862b585fd2e1b63897f26" exitCode=0 Feb 19 21:23:54 crc kubenswrapper[4886]: I0219 21:23:54.127058 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9902d175-54e2-470a-b837-b69acf4393d8","Type":"ContainerDied","Data":"cab1c5543420d3e7b9d6e28ba128ec3fa1923130cec862b585fd2e1b63897f26"} Feb 19 21:23:54 crc kubenswrapper[4886]: I0219 21:23:54.964337 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.085094 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9902d175-54e2-470a-b837-b69acf4393d8-logs\") pod \"9902d175-54e2-470a-b837-b69acf4393d8\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.085184 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f7pt\" (UniqueName: \"kubernetes.io/projected/9902d175-54e2-470a-b837-b69acf4393d8-kube-api-access-5f7pt\") pod \"9902d175-54e2-470a-b837-b69acf4393d8\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.085285 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-combined-ca-bundle\") pod \"9902d175-54e2-470a-b837-b69acf4393d8\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.085559 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-config-data\") pod \"9902d175-54e2-470a-b837-b69acf4393d8\" (UID: \"9902d175-54e2-470a-b837-b69acf4393d8\") " Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.086204 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9902d175-54e2-470a-b837-b69acf4393d8-logs" (OuterVolumeSpecName: "logs") pod "9902d175-54e2-470a-b837-b69acf4393d8" (UID: "9902d175-54e2-470a-b837-b69acf4393d8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.086610 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9902d175-54e2-470a-b837-b69acf4393d8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.094329 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9902d175-54e2-470a-b837-b69acf4393d8-kube-api-access-5f7pt" (OuterVolumeSpecName: "kube-api-access-5f7pt") pod "9902d175-54e2-470a-b837-b69acf4393d8" (UID: "9902d175-54e2-470a-b837-b69acf4393d8"). InnerVolumeSpecName "kube-api-access-5f7pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.123935 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9902d175-54e2-470a-b837-b69acf4393d8" (UID: "9902d175-54e2-470a-b837-b69acf4393d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.131928 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-config-data" (OuterVolumeSpecName: "config-data") pod "9902d175-54e2-470a-b837-b69acf4393d8" (UID: "9902d175-54e2-470a-b837-b69acf4393d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.141727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9902d175-54e2-470a-b837-b69acf4393d8","Type":"ContainerDied","Data":"299a2736af6b1f8fc669a0493ea6f9d167ffb31b3b6208bdbedb0db74f93ae6b"} Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.141797 4886 scope.go:117] "RemoveContainer" containerID="cab1c5543420d3e7b9d6e28ba128ec3fa1923130cec862b585fd2e1b63897f26" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.142014 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.190295 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f7pt\" (UniqueName: \"kubernetes.io/projected/9902d175-54e2-470a-b837-b69acf4393d8-kube-api-access-5f7pt\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.190371 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.190386 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902d175-54e2-470a-b837-b69acf4393d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.246170 4886 scope.go:117] "RemoveContainer" containerID="f4f3c23d68eaaad9cb395300da7bed6e5ddeeaeccdfe8ce698fda819ea141e67" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.261180 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.310431 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:55 
crc kubenswrapper[4886]: I0219 21:23:55.328601 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:55 crc kubenswrapper[4886]: E0219 21:23:55.329128 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-api" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.329144 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-api" Feb 19 21:23:55 crc kubenswrapper[4886]: E0219 21:23:55.329179 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-log" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.329186 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-log" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.330045 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-api" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.330077 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9902d175-54e2-470a-b837-b69acf4393d8" containerName="nova-api-log" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.331434 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.334580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.334865 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.334983 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.383726 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.395679 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-public-tls-certs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.395931 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8e5d10-ef29-4c16-9741-50b48418f573-logs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.396204 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hsd\" (UniqueName: \"kubernetes.io/projected/dd8e5d10-ef29-4c16-9741-50b48418f573-kube-api-access-k5hsd\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.396333 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.396489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-config-data\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.396606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.415389 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.441806 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.499129 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hsd\" (UniqueName: \"kubernetes.io/projected/dd8e5d10-ef29-4c16-9741-50b48418f573-kube-api-access-k5hsd\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.499217 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " 
pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.499242 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-config-data\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.499281 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.499378 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-public-tls-certs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.499405 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8e5d10-ef29-4c16-9741-50b48418f573-logs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.501179 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8e5d10-ef29-4c16-9741-50b48418f573-logs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.507557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.507816 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.523004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-config-data\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.525988 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.532124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hsd\" (UniqueName: \"kubernetes.io/projected/dd8e5d10-ef29-4c16-9741-50b48418f573-kube-api-access-k5hsd\") pod \"nova-api-0\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.665990 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:23:55 crc kubenswrapper[4886]: I0219 21:23:55.987998 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.114696 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-combined-ca-bundle\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.114771 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-scripts\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.114841 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-sg-core-conf-yaml\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.114909 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-run-httpd\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.115082 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-log-httpd\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.115103 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q22fh\" (UniqueName: 
\"kubernetes.io/projected/30b792c2-edf7-43c5-9627-fb270dbdefc1-kube-api-access-q22fh\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.115245 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-config-data\") pod \"30b792c2-edf7-43c5-9627-fb270dbdefc1\" (UID: \"30b792c2-edf7-43c5-9627-fb270dbdefc1\") " Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.118626 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.118839 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.136577 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-scripts" (OuterVolumeSpecName: "scripts") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.139237 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b792c2-edf7-43c5-9627-fb270dbdefc1-kube-api-access-q22fh" (OuterVolumeSpecName: "kube-api-access-q22fh") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "kube-api-access-q22fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.157658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.169524 4886 generic.go:334] "Generic (PLEG): container finished" podID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerID="a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3" exitCode=0 Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.171383 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.172070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerDied","Data":"a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3"} Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.172110 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30b792c2-edf7-43c5-9627-fb270dbdefc1","Type":"ContainerDied","Data":"6ae6fef2e65250b32f203317d700853e8e440afa84decf61dc3dff5313a41e50"} Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.172132 4886 scope.go:117] "RemoveContainer" containerID="d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.199244 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.219527 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.219552 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.219562 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q22fh\" (UniqueName: \"kubernetes.io/projected/30b792c2-edf7-43c5-9627-fb270dbdefc1-kube-api-access-q22fh\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.219571 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/30b792c2-edf7-43c5-9627-fb270dbdefc1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.219579 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.304996 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.327364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: W0219 21:23:56.332330 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd8e5d10_ef29_4c16_9741_50b48418f573.slice/crio-a71aa9ce6e88233fb3f3d639327443e46668495749b7f0fd0252decd70843848 WatchSource:0}: Error finding container a71aa9ce6e88233fb3f3d639327443e46668495749b7f0fd0252decd70843848: Status 404 returned error can't find the container with id a71aa9ce6e88233fb3f3d639327443e46668495749b7f0fd0252decd70843848 Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.339864 4886 scope.go:117] "RemoveContainer" containerID="01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.366501 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gfkv8"] Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.367001 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="proxy-httpd" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367014 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="proxy-httpd" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.367042 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-central-agent" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367049 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-central-agent" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.367069 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-notification-agent" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367075 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-notification-agent" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.367089 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="sg-core" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367094 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="sg-core" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367325 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-central-agent" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367346 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="ceilometer-notification-agent" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367354 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="proxy-httpd" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.367364 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" containerName="sg-core" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.368132 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.371055 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.371359 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.378213 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gfkv8"] Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.402458 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-config-data" (OuterVolumeSpecName: "config-data") pod "30b792c2-edf7-43c5-9627-fb270dbdefc1" (UID: "30b792c2-edf7-43c5-9627-fb270dbdefc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.419616 4886 scope.go:117] "RemoveContainer" containerID="a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.424384 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-config-data\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.424428 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-scripts\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.424524 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmg9\" (UniqueName: \"kubernetes.io/projected/8426eb13-8a64-41fd-a608-0fd4d138cca1-kube-api-access-6mmg9\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.424587 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.424678 4886 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.424693 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b792c2-edf7-43c5-9627-fb270dbdefc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.450572 4886 scope.go:117] "RemoveContainer" containerID="b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.483222 4886 scope.go:117] "RemoveContainer" containerID="d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.483666 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1\": container with ID starting with d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1 not found: ID does not exist" containerID="d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.483696 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1"} err="failed to get container status \"d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1\": rpc error: code = NotFound desc = could not find container \"d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1\": container with ID starting with d11b337b0b2a106194330e617811ee22ee61e05ed4829465596ccfdaccd7a9a1 not found: ID does not exist" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.483716 4886 scope.go:117] "RemoveContainer" 
containerID="01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.484103 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1\": container with ID starting with 01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1 not found: ID does not exist" containerID="01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.484151 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1"} err="failed to get container status \"01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1\": rpc error: code = NotFound desc = could not find container \"01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1\": container with ID starting with 01df94fafdba17c14ef43d5b6b4162bace966f37b2c02a045d2609dbb3c4fca1 not found: ID does not exist" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.484178 4886 scope.go:117] "RemoveContainer" containerID="a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.484455 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3\": container with ID starting with a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3 not found: ID does not exist" containerID="a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.484482 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3"} err="failed to get container status \"a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3\": rpc error: code = NotFound desc = could not find container \"a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3\": container with ID starting with a56ba521409ae9f616f127e0a171fee74b79473b3d58d7e94b781661f8b242b3 not found: ID does not exist" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.484497 4886 scope.go:117] "RemoveContainer" containerID="b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446" Feb 19 21:23:56 crc kubenswrapper[4886]: E0219 21:23:56.484723 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446\": container with ID starting with b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446 not found: ID does not exist" containerID="b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.484750 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446"} err="failed to get container status \"b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446\": rpc error: code = NotFound desc = could not find container \"b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446\": container with ID starting with b00819b92c5713517370d9fe5ebe37291e0a0e660cfdd39dc9c75293aeb25446 not found: ID does not exist" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.514443 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.526375 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-config-data\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.526423 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-scripts\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.526524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmg9\" (UniqueName: \"kubernetes.io/projected/8426eb13-8a64-41fd-a608-0fd4d138cca1-kube-api-access-6mmg9\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.526550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.529314 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.531150 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 
21:23:56.533907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-config-data\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.539255 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.543854 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.557700 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.557889 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.558455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-scripts\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.558659 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.558833 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmg9\" (UniqueName: \"kubernetes.io/projected/8426eb13-8a64-41fd-a608-0fd4d138cca1-kube-api-access-6mmg9\") pod \"nova-cell1-cell-mapping-gfkv8\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") " pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.617735 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="30b792c2-edf7-43c5-9627-fb270dbdefc1" path="/var/lib/kubelet/pods/30b792c2-edf7-43c5-9627-fb270dbdefc1/volumes" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.618706 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9902d175-54e2-470a-b837-b69acf4393d8" path="/var/lib/kubelet/pods/9902d175-54e2-470a-b837-b69acf4393d8/volumes" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.628869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.628928 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.628969 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6xs\" (UniqueName: \"kubernetes.io/projected/d0fbe388-9f50-4303-8ba4-5f0d849293d1-kube-api-access-wt6xs\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.629168 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-config-data\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.629218 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-run-httpd\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.629234 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-log-httpd\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.629321 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-scripts\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.720559 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gfkv8" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737460 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-config-data\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-run-httpd\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737573 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-log-httpd\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-scripts\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737730 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737777 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.737814 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6xs\" (UniqueName: \"kubernetes.io/projected/d0fbe388-9f50-4303-8ba4-5f0d849293d1-kube-api-access-wt6xs\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.738375 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-run-httpd\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.738741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-log-httpd\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.743386 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.745213 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-config-data\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.754720 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-scripts\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.755306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6xs\" (UniqueName: \"kubernetes.io/projected/d0fbe388-9f50-4303-8ba4-5f0d849293d1-kube-api-access-wt6xs\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.759602 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " pod="openstack/ceilometer-0" Feb 19 21:23:56 crc kubenswrapper[4886]: I0219 21:23:56.872968 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.198890 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd8e5d10-ef29-4c16-9741-50b48418f573","Type":"ContainerStarted","Data":"33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e"} Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.199450 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd8e5d10-ef29-4c16-9741-50b48418f573","Type":"ContainerStarted","Data":"54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0"} Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.199538 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd8e5d10-ef29-4c16-9741-50b48418f573","Type":"ContainerStarted","Data":"a71aa9ce6e88233fb3f3d639327443e46668495749b7f0fd0252decd70843848"} Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.240925 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.240896725 podStartE2EDuration="2.240896725s" podCreationTimestamp="2026-02-19 21:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:57.2265028 +0000 UTC m=+1467.854345840" watchObservedRunningTime="2026-02-19 21:23:57.240896725 +0000 UTC m=+1467.868739775" Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.493528 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gfkv8"] Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.570170 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.767548 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-9b86998b5-8mbkb"] Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.787629 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerName="dnsmasq-dns" containerID="cri-o://dfadc562e3c2e7816b4b7caa433d368b298665dc4c5bf162f7844d8af16972d0" gracePeriod=10 Feb 19 21:23:57 crc kubenswrapper[4886]: I0219 21:23:57.788289 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.233704 4886 generic.go:334] "Generic (PLEG): container finished" podID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerID="dfadc562e3c2e7816b4b7caa433d368b298665dc4c5bf162f7844d8af16972d0" exitCode=0 Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.234166 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" event={"ID":"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0","Type":"ContainerDied","Data":"dfadc562e3c2e7816b4b7caa433d368b298665dc4c5bf162f7844d8af16972d0"} Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.242475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerStarted","Data":"b39cccd45c7a56949d3e733d50e2f37662bdb535831f2c1f2d6bf2ea871a1d21"} Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.245285 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gfkv8" event={"ID":"8426eb13-8a64-41fd-a608-0fd4d138cca1","Type":"ContainerStarted","Data":"5f051368ab1d2d3622745d171ab2c0994ee20061ba50c358c77770ade92477aa"} Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.245343 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gfkv8" 
event={"ID":"8426eb13-8a64-41fd-a608-0fd4d138cca1","Type":"ContainerStarted","Data":"17cd188a19946f144b4b3e852d523790131335075a6381971f53a34a69b8c132"} Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.278291 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gfkv8" podStartSLOduration=2.278251466 podStartE2EDuration="2.278251466s" podCreationTimestamp="2026-02-19 21:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:23:58.265271956 +0000 UTC m=+1468.893115006" watchObservedRunningTime="2026-02-19 21:23:58.278251466 +0000 UTC m=+1468.906094516" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.500703 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.620221 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-svc\") pod \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.620350 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-sb\") pod \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.620437 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-config\") pod \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 
21:23:58.620518 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-nb\") pod \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.620695 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cphvw\" (UniqueName: \"kubernetes.io/projected/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-kube-api-access-cphvw\") pod \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.620733 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-swift-storage-0\") pod \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\" (UID: \"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0\") " Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.633815 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-kube-api-access-cphvw" (OuterVolumeSpecName: "kube-api-access-cphvw") pod "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" (UID: "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0"). InnerVolumeSpecName "kube-api-access-cphvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.707228 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" (UID: "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.712157 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" (UID: "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.716980 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" (UID: "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.723889 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cphvw\" (UniqueName: \"kubernetes.io/projected/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-kube-api-access-cphvw\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.725252 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.725314 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.725368 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.731481 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-config" (OuterVolumeSpecName: "config") pod "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" (UID: "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.754383 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" (UID: "3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.827934 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:58 crc kubenswrapper[4886]: I0219 21:23:58.828218 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.265719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" event={"ID":"3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0","Type":"ContainerDied","Data":"a253174d8061a8ac031f77bc991c7abcf88203e7b0557b97417b6552f371729d"} Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.266017 4886 scope.go:117] "RemoveContainer" containerID="dfadc562e3c2e7816b4b7caa433d368b298665dc4c5bf162f7844d8af16972d0" Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.266152 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-8mbkb" Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.275359 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerStarted","Data":"31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb"} Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.317403 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-8mbkb"] Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.329786 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-8mbkb"] Feb 19 21:23:59 crc kubenswrapper[4886]: I0219 21:23:59.330127 4886 scope.go:117] "RemoveContainer" containerID="25d01f1d94092801dbc89f87de5924d0d87b5aae365fd020e46b7558d37ab937" Feb 19 21:24:00 crc kubenswrapper[4886]: I0219 21:24:00.621364 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" path="/var/lib/kubelet/pods/3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0/volumes" Feb 19 21:24:01 crc kubenswrapper[4886]: I0219 21:24:01.303309 4886 generic.go:334] "Generic (PLEG): container finished" podID="289abf37-876b-4ef4-8782-9749f978c46f" containerID="1be42c2a3daa8ceb41689ab5d9e3a69bd9fea91cc818e0fe2fe86e66bf67768a" exitCode=137 Feb 19 21:24:01 crc kubenswrapper[4886]: I0219 21:24:01.303506 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerDied","Data":"1be42c2a3daa8ceb41689ab5d9e3a69bd9fea91cc818e0fe2fe86e66bf67768a"} Feb 19 21:24:01 crc kubenswrapper[4886]: I0219 21:24:01.880663 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.015071 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-config-data\") pod \"289abf37-876b-4ef4-8782-9749f978c46f\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.015158 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-combined-ca-bundle\") pod \"289abf37-876b-4ef4-8782-9749f978c46f\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.015443 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-scripts\") pod \"289abf37-876b-4ef4-8782-9749f978c46f\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.015603 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhsfl\" (UniqueName: \"kubernetes.io/projected/289abf37-876b-4ef4-8782-9749f978c46f-kube-api-access-jhsfl\") pod \"289abf37-876b-4ef4-8782-9749f978c46f\" (UID: \"289abf37-876b-4ef4-8782-9749f978c46f\") " Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.021478 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-scripts" (OuterVolumeSpecName: "scripts") pod "289abf37-876b-4ef4-8782-9749f978c46f" (UID: "289abf37-876b-4ef4-8782-9749f978c46f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.023770 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289abf37-876b-4ef4-8782-9749f978c46f-kube-api-access-jhsfl" (OuterVolumeSpecName: "kube-api-access-jhsfl") pod "289abf37-876b-4ef4-8782-9749f978c46f" (UID: "289abf37-876b-4ef4-8782-9749f978c46f"). InnerVolumeSpecName "kube-api-access-jhsfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.130278 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.130309 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhsfl\" (UniqueName: \"kubernetes.io/projected/289abf37-876b-4ef4-8782-9749f978c46f-kube-api-access-jhsfl\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.162476 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-config-data" (OuterVolumeSpecName: "config-data") pod "289abf37-876b-4ef4-8782-9749f978c46f" (UID: "289abf37-876b-4ef4-8782-9749f978c46f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.174072 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "289abf37-876b-4ef4-8782-9749f978c46f" (UID: "289abf37-876b-4ef4-8782-9749f978c46f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.232516 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.232554 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289abf37-876b-4ef4-8782-9749f978c46f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.320697 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerStarted","Data":"1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72"} Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.324004 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"289abf37-876b-4ef4-8782-9749f978c46f","Type":"ContainerDied","Data":"6538b6d8a6bc03a3bdc9c7822675dda2cbfcfce2457427728c68495c06baa00a"} Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.324055 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.324072 4886 scope.go:117] "RemoveContainer" containerID="1be42c2a3daa8ceb41689ab5d9e3a69bd9fea91cc818e0fe2fe86e66bf67768a" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.351540 4886 scope.go:117] "RemoveContainer" containerID="9c0678faa3269b581c0a73fd70eafd59e53b5b6dcf68b9c07423c098d2dd578e" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.384000 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.403853 4886 scope.go:117] "RemoveContainer" containerID="7a5caf783c735afe9baa3d021033615037568552f2d2a8bfa85248380c696ba9" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.407247 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.429834 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 21:24:02 crc kubenswrapper[4886]: E0219 21:24:02.430431 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerName="init" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430454 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerName="init" Feb 19 21:24:02 crc kubenswrapper[4886]: E0219 21:24:02.430471 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-listener" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430480 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-listener" Feb 19 21:24:02 crc kubenswrapper[4886]: E0219 21:24:02.430499 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-api" Feb 19 21:24:02 crc 
kubenswrapper[4886]: I0219 21:24:02.430507 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-api" Feb 19 21:24:02 crc kubenswrapper[4886]: E0219 21:24:02.430519 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerName="dnsmasq-dns" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430530 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerName="dnsmasq-dns" Feb 19 21:24:02 crc kubenswrapper[4886]: E0219 21:24:02.430558 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-notifier" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430567 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-notifier" Feb 19 21:24:02 crc kubenswrapper[4886]: E0219 21:24:02.430581 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-evaluator" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430590 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-evaluator" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430855 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad3965c-f622-4bd9-8d72-2bf3bf1b7da0" containerName="dnsmasq-dns" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430878 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-api" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430903 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-listener" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 
21:24:02.430925 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-notifier" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.430939 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="289abf37-876b-4ef4-8782-9749f978c46f" containerName="aodh-evaluator" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.439164 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.440631 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.443343 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-85ctw" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.443565 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.443693 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.443808 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.447080 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.451541 4886 scope.go:117] "RemoveContainer" containerID="b6e5babe4369fa8f4e9c46a65995605769fd12d2b75c1be9e8f74cb41f0a1923" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.541688 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh27k\" (UniqueName: \"kubernetes.io/projected/d990da31-f5ca-48c3-b4bf-981e1f029e05-kube-api-access-sh27k\") pod \"aodh-0\" (UID: 
\"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.541792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-scripts\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.541813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-config-data\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.541854 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-internal-tls-certs\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.541876 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.541893 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-public-tls-certs\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.629220 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="289abf37-876b-4ef4-8782-9749f978c46f" path="/var/lib/kubelet/pods/289abf37-876b-4ef4-8782-9749f978c46f/volumes" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.646437 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh27k\" (UniqueName: \"kubernetes.io/projected/d990da31-f5ca-48c3-b4bf-981e1f029e05-kube-api-access-sh27k\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.646606 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-scripts\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.646642 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-config-data\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.646721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-internal-tls-certs\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.647006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.647058 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-public-tls-certs\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.651616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-scripts\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.652321 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-config-data\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.652580 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.652861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-internal-tls-certs\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.655712 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-public-tls-certs\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0" Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.661714 4886 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-sh27k\" (UniqueName: \"kubernetes.io/projected/d990da31-f5ca-48c3-b4bf-981e1f029e05-kube-api-access-sh27k\") pod \"aodh-0\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " pod="openstack/aodh-0"
Feb 19 21:24:02 crc kubenswrapper[4886]: I0219 21:24:02.761932 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 19 21:24:03 crc kubenswrapper[4886]: I0219 21:24:03.267242 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 19 21:24:03 crc kubenswrapper[4886]: I0219 21:24:03.340830 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerStarted","Data":"b0fe2e051951d37bc434e03eb0f3cb23f98df56898182222aa2991ac33ea0bc8"}
Feb 19 21:24:03 crc kubenswrapper[4886]: I0219 21:24:03.347309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerStarted","Data":"ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0"}
Feb 19 21:24:04 crc kubenswrapper[4886]: I0219 21:24:04.370488 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerStarted","Data":"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0"}
Feb 19 21:24:04 crc kubenswrapper[4886]: I0219 21:24:04.377290 4886 generic.go:334] "Generic (PLEG): container finished" podID="8426eb13-8a64-41fd-a608-0fd4d138cca1" containerID="5f051368ab1d2d3622745d171ab2c0994ee20061ba50c358c77770ade92477aa" exitCode=0
Feb 19 21:24:04 crc kubenswrapper[4886]: I0219 21:24:04.377344 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gfkv8" event={"ID":"8426eb13-8a64-41fd-a608-0fd4d138cca1","Type":"ContainerDied","Data":"5f051368ab1d2d3622745d171ab2c0994ee20061ba50c358c77770ade92477aa"}
Feb 19 21:24:05 crc kubenswrapper[4886]: I0219 21:24:05.400801 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerStarted","Data":"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477"}
Feb 19 21:24:05 crc kubenswrapper[4886]: I0219 21:24:05.407489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerStarted","Data":"672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f"}
Feb 19 21:24:05 crc kubenswrapper[4886]: I0219 21:24:05.407611 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 21:24:05 crc kubenswrapper[4886]: I0219 21:24:05.449423 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.739393887 podStartE2EDuration="9.449396148s" podCreationTimestamp="2026-02-19 21:23:56 +0000 UTC" firstStartedPulling="2026-02-19 21:23:57.738059675 +0000 UTC m=+1468.365902725" lastFinishedPulling="2026-02-19 21:24:04.448061936 +0000 UTC m=+1475.075904986" observedRunningTime="2026-02-19 21:24:05.433327462 +0000 UTC m=+1476.061170522" watchObservedRunningTime="2026-02-19 21:24:05.449396148 +0000 UTC m=+1476.077239198"
Feb 19 21:24:05 crc kubenswrapper[4886]: I0219 21:24:05.667228 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 21:24:05 crc kubenswrapper[4886]: I0219 21:24:05.667755 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.086193 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gfkv8"
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.153103 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-config-data\") pod \"8426eb13-8a64-41fd-a608-0fd4d138cca1\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") "
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.153200 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mmg9\" (UniqueName: \"kubernetes.io/projected/8426eb13-8a64-41fd-a608-0fd4d138cca1-kube-api-access-6mmg9\") pod \"8426eb13-8a64-41fd-a608-0fd4d138cca1\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") "
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.153316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-scripts\") pod \"8426eb13-8a64-41fd-a608-0fd4d138cca1\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") "
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.153373 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-combined-ca-bundle\") pod \"8426eb13-8a64-41fd-a608-0fd4d138cca1\" (UID: \"8426eb13-8a64-41fd-a608-0fd4d138cca1\") "
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.162936 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8426eb13-8a64-41fd-a608-0fd4d138cca1-kube-api-access-6mmg9" (OuterVolumeSpecName: "kube-api-access-6mmg9") pod "8426eb13-8a64-41fd-a608-0fd4d138cca1" (UID: "8426eb13-8a64-41fd-a608-0fd4d138cca1"). InnerVolumeSpecName "kube-api-access-6mmg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.179380 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-scripts" (OuterVolumeSpecName: "scripts") pod "8426eb13-8a64-41fd-a608-0fd4d138cca1" (UID: "8426eb13-8a64-41fd-a608-0fd4d138cca1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.248173 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8426eb13-8a64-41fd-a608-0fd4d138cca1" (UID: "8426eb13-8a64-41fd-a608-0fd4d138cca1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.257320 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mmg9\" (UniqueName: \"kubernetes.io/projected/8426eb13-8a64-41fd-a608-0fd4d138cca1-kube-api-access-6mmg9\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.257354 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.257369 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.263926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-config-data" (OuterVolumeSpecName: "config-data") pod "8426eb13-8a64-41fd-a608-0fd4d138cca1" (UID: "8426eb13-8a64-41fd-a608-0fd4d138cca1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.360255 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426eb13-8a64-41fd-a608-0fd4d138cca1-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.423506 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerStarted","Data":"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87"}
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.423873 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerStarted","Data":"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5"}
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.426965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gfkv8" event={"ID":"8426eb13-8a64-41fd-a608-0fd4d138cca1","Type":"ContainerDied","Data":"17cd188a19946f144b4b3e852d523790131335075a6381971f53a34a69b8c132"}
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.427011 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17cd188a19946f144b4b3e852d523790131335075a6381971f53a34a69b8c132"
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.427050 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gfkv8"
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.451109 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.810753509 podStartE2EDuration="4.451092828s" podCreationTimestamp="2026-02-19 21:24:02 +0000 UTC" firstStartedPulling="2026-02-19 21:24:03.271153654 +0000 UTC m=+1473.898996704" lastFinishedPulling="2026-02-19 21:24:05.911492973 +0000 UTC m=+1476.539336023" observedRunningTime="2026-02-19 21:24:06.447610762 +0000 UTC m=+1477.075453812" watchObservedRunningTime="2026-02-19 21:24:06.451092828 +0000 UTC m=+1477.078935878"
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.594581 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.644526 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.644754 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d2615997-dab1-4bc3-a5cd-1f89031674c8" containerName="nova-scheduler-scheduler" containerID="cri-o://e434787c1ac46e3a529d26a14b6c40f92f648f09e2b8b7d25fff60b0cb7b6c83" gracePeriod=30
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.660483 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.660724 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-log" containerID="cri-o://39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d" gracePeriod=30
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.661187 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-metadata" containerID="cri-o://2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373" gracePeriod=30
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.686480 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 21:24:06 crc kubenswrapper[4886]: I0219 21:24:06.686490 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 21:24:07 crc kubenswrapper[4886]: I0219 21:24:07.440085 4886 generic.go:334] "Generic (PLEG): container finished" podID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerID="39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d" exitCode=143
Feb 19 21:24:07 crc kubenswrapper[4886]: I0219 21:24:07.440310 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fe39c39-ee1f-49c5-bd83-a984ee05475d","Type":"ContainerDied","Data":"39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d"}
Feb 19 21:24:07 crc kubenswrapper[4886]: I0219 21:24:07.441615 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-log" containerID="cri-o://54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0" gracePeriod=30
Feb 19 21:24:07 crc kubenswrapper[4886]: I0219 21:24:07.442097 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-api" containerID="cri-o://33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e" gracePeriod=30
Feb 19 21:24:08 crc kubenswrapper[4886]: I0219 21:24:08.452805 4886 generic.go:334] "Generic (PLEG): container finished" podID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerID="54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0" exitCode=143
Feb 19 21:24:08 crc kubenswrapper[4886]: I0219 21:24:08.453048 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd8e5d10-ef29-4c16-9741-50b48418f573","Type":"ContainerDied","Data":"54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0"}
Feb 19 21:24:09 crc kubenswrapper[4886]: I0219 21:24:09.794942 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": read tcp 10.217.0.2:47794->10.217.0.248:8775: read: connection reset by peer"
Feb 19 21:24:09 crc kubenswrapper[4886]: I0219 21:24:09.794989 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": read tcp 10.217.0.2:47808->10.217.0.248:8775: read: connection reset by peer"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.452282 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.496400 4886 generic.go:334] "Generic (PLEG): container finished" podID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerID="2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373" exitCode=0
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.496478 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fe39c39-ee1f-49c5-bd83-a984ee05475d","Type":"ContainerDied","Data":"2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373"}
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.496505 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fe39c39-ee1f-49c5-bd83-a984ee05475d","Type":"ContainerDied","Data":"54f23a724a94b214441c93a63f27ae4b285ea2bdbb1d77f1660cbdf22a409311"}
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.496521 4886 scope.go:117] "RemoveContainer" containerID="2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.496646 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.499600 4886 generic.go:334] "Generic (PLEG): container finished" podID="d2615997-dab1-4bc3-a5cd-1f89031674c8" containerID="e434787c1ac46e3a529d26a14b6c40f92f648f09e2b8b7d25fff60b0cb7b6c83" exitCode=0
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.499672 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2615997-dab1-4bc3-a5cd-1f89031674c8","Type":"ContainerDied","Data":"e434787c1ac46e3a529d26a14b6c40f92f648f09e2b8b7d25fff60b0cb7b6c83"}
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.524500 4886 scope.go:117] "RemoveContainer" containerID="39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.557608 4886 scope.go:117] "RemoveContainer" containerID="2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373"
Feb 19 21:24:10 crc kubenswrapper[4886]: E0219 21:24:10.558003 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373\": container with ID starting with 2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373 not found: ID does not exist" containerID="2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.558039 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373"} err="failed to get container status \"2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373\": rpc error: code = NotFound desc = could not find container \"2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373\": container with ID starting with 2960778f030236b823fbbda56c18ddf34bce71d650de3ab8ee27ac78cfa6a373 not found: ID does not exist"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.558064 4886 scope.go:117] "RemoveContainer" containerID="39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d"
Feb 19 21:24:10 crc kubenswrapper[4886]: E0219 21:24:10.558314 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d\": container with ID starting with 39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d not found: ID does not exist" containerID="39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.558339 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d"} err="failed to get container status \"39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d\": rpc error: code = NotFound desc = could not find container \"39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d\": container with ID starting with 39192fd525ed71777feb31914082938616323fc2c5b40d13abc233f694b6078d not found: ID does not exist"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.571441 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fe39c39-ee1f-49c5-bd83-a984ee05475d-logs\") pod \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.571586 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-nova-metadata-tls-certs\") pod \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.571714 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-config-data\") pod \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.571810 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-combined-ca-bundle\") pod \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.571917 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2cxr\" (UniqueName: \"kubernetes.io/projected/2fe39c39-ee1f-49c5-bd83-a984ee05475d-kube-api-access-v2cxr\") pod \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\" (UID: \"2fe39c39-ee1f-49c5-bd83-a984ee05475d\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.576060 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe39c39-ee1f-49c5-bd83-a984ee05475d-logs" (OuterVolumeSpecName: "logs") pod "2fe39c39-ee1f-49c5-bd83-a984ee05475d" (UID: "2fe39c39-ee1f-49c5-bd83-a984ee05475d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.583689 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe39c39-ee1f-49c5-bd83-a984ee05475d-kube-api-access-v2cxr" (OuterVolumeSpecName: "kube-api-access-v2cxr") pod "2fe39c39-ee1f-49c5-bd83-a984ee05475d" (UID: "2fe39c39-ee1f-49c5-bd83-a984ee05475d"). InnerVolumeSpecName "kube-api-access-v2cxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.606273 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fe39c39-ee1f-49c5-bd83-a984ee05475d" (UID: "2fe39c39-ee1f-49c5-bd83-a984ee05475d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.608346 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-config-data" (OuterVolumeSpecName: "config-data") pod "2fe39c39-ee1f-49c5-bd83-a984ee05475d" (UID: "2fe39c39-ee1f-49c5-bd83-a984ee05475d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.642687 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.682042 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.682086 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2cxr\" (UniqueName: \"kubernetes.io/projected/2fe39c39-ee1f-49c5-bd83-a984ee05475d-kube-api-access-v2cxr\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.682108 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fe39c39-ee1f-49c5-bd83-a984ee05475d-logs\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.682124 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.689065 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2fe39c39-ee1f-49c5-bd83-a984ee05475d" (UID: "2fe39c39-ee1f-49c5-bd83-a984ee05475d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.784416 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcsd8\" (UniqueName: \"kubernetes.io/projected/d2615997-dab1-4bc3-a5cd-1f89031674c8-kube-api-access-fcsd8\") pod \"d2615997-dab1-4bc3-a5cd-1f89031674c8\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.784522 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-config-data\") pod \"d2615997-dab1-4bc3-a5cd-1f89031674c8\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.784681 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-combined-ca-bundle\") pod \"d2615997-dab1-4bc3-a5cd-1f89031674c8\" (UID: \"d2615997-dab1-4bc3-a5cd-1f89031674c8\") "
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.785809 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe39c39-ee1f-49c5-bd83-a984ee05475d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.787724 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2615997-dab1-4bc3-a5cd-1f89031674c8-kube-api-access-fcsd8" (OuterVolumeSpecName: "kube-api-access-fcsd8") pod "d2615997-dab1-4bc3-a5cd-1f89031674c8" (UID: "d2615997-dab1-4bc3-a5cd-1f89031674c8"). InnerVolumeSpecName "kube-api-access-fcsd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.813020 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-config-data" (OuterVolumeSpecName: "config-data") pod "d2615997-dab1-4bc3-a5cd-1f89031674c8" (UID: "d2615997-dab1-4bc3-a5cd-1f89031674c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.818825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2615997-dab1-4bc3-a5cd-1f89031674c8" (UID: "d2615997-dab1-4bc3-a5cd-1f89031674c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.843787 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.857321 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.876415 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 21:24:10 crc kubenswrapper[4886]: E0219 21:24:10.877150 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-log"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877175 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-log"
Feb 19 21:24:10 crc kubenswrapper[4886]: E0219 21:24:10.877237 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-metadata"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877246 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-metadata"
Feb 19 21:24:10 crc kubenswrapper[4886]: E0219 21:24:10.877282 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8426eb13-8a64-41fd-a608-0fd4d138cca1" containerName="nova-manage"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877291 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8426eb13-8a64-41fd-a608-0fd4d138cca1" containerName="nova-manage"
Feb 19 21:24:10 crc kubenswrapper[4886]: E0219 21:24:10.877299 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2615997-dab1-4bc3-a5cd-1f89031674c8" containerName="nova-scheduler-scheduler"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877309 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2615997-dab1-4bc3-a5cd-1f89031674c8" containerName="nova-scheduler-scheduler"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877571 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-metadata"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877607 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" containerName="nova-metadata-log"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877620 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8426eb13-8a64-41fd-a608-0fd4d138cca1" containerName="nova-manage"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.877636 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2615997-dab1-4bc3-a5cd-1f89031674c8" containerName="nova-scheduler-scheduler"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.881409 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.884635 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.885308 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.885719 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.888001 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcsd8\" (UniqueName: \"kubernetes.io/projected/d2615997-dab1-4bc3-a5cd-1f89031674c8-kube-api-access-fcsd8\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.888025 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.888035 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2615997-dab1-4bc3-a5cd-1f89031674c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.989998 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvht2\" (UniqueName: \"kubernetes.io/projected/e80b1a73-65bd-4626-a093-7464dbb3cb56-kube-api-access-jvht2\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.990093 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.990303 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-config-data\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.990455 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80b1a73-65bd-4626-a093-7464dbb3cb56-logs\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:10 crc kubenswrapper[4886]: I0219 21:24:10.990578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.092610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvht2\" (UniqueName: \"kubernetes.io/projected/e80b1a73-65bd-4626-a093-7464dbb3cb56-kube-api-access-jvht2\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.092691 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.092773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-config-data\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.092813 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80b1a73-65bd-4626-a093-7464dbb3cb56-logs\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.092851 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.093458 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80b1a73-65bd-4626-a093-7464dbb3cb56-logs\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.097682 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.097743 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-config-data\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.105879 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80b1a73-65bd-4626-a093-7464dbb3cb56-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.112686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvht2\" (UniqueName: \"kubernetes.io/projected/e80b1a73-65bd-4626-a093-7464dbb3cb56-kube-api-access-jvht2\") pod \"nova-metadata-0\" (UID: \"e80b1a73-65bd-4626-a093-7464dbb3cb56\") " pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.276345 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.520761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2615997-dab1-4bc3-a5cd-1f89031674c8","Type":"ContainerDied","Data":"91440b02bbdf51b6dbf25b4ffdc4828103a98877db388884de5a446f20350cc4"}
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.521293 4886 scope.go:117] "RemoveContainer" containerID="e434787c1ac46e3a529d26a14b6c40f92f648f09e2b8b7d25fff60b0cb7b6c83"
Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.521522 4886 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.571162 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.597762 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.628717 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.630831 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.639803 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.664677 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.708053 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722c0850-5f3e-461f-b133-6f9fb91dfd59-config-data\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.708113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722c0850-5f3e-461f-b133-6f9fb91dfd59-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.708235 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq9nc\" (UniqueName: 
\"kubernetes.io/projected/722c0850-5f3e-461f-b133-6f9fb91dfd59-kube-api-access-pq9nc\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.812837 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq9nc\" (UniqueName: \"kubernetes.io/projected/722c0850-5f3e-461f-b133-6f9fb91dfd59-kube-api-access-pq9nc\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.813044 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722c0850-5f3e-461f-b133-6f9fb91dfd59-config-data\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.813071 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722c0850-5f3e-461f-b133-6f9fb91dfd59-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.821607 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722c0850-5f3e-461f-b133-6f9fb91dfd59-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.837361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722c0850-5f3e-461f-b133-6f9fb91dfd59-config-data\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " 
pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.842008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq9nc\" (UniqueName: \"kubernetes.io/projected/722c0850-5f3e-461f-b133-6f9fb91dfd59-kube-api-access-pq9nc\") pod \"nova-scheduler-0\" (UID: \"722c0850-5f3e-461f-b133-6f9fb91dfd59\") " pod="openstack/nova-scheduler-0" Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.850458 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 21:24:11 crc kubenswrapper[4886]: I0219 21:24:11.960743 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 21:24:12 crc kubenswrapper[4886]: W0219 21:24:12.484434 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722c0850_5f3e_461f_b133_6f9fb91dfd59.slice/crio-bf4d57c17cf25ab0ddaaec2675d97f10399a058c20bb36f9259fc5df1fe22ea4 WatchSource:0}: Error finding container bf4d57c17cf25ab0ddaaec2675d97f10399a058c20bb36f9259fc5df1fe22ea4: Status 404 returned error can't find the container with id bf4d57c17cf25ab0ddaaec2675d97f10399a058c20bb36f9259fc5df1fe22ea4 Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.486806 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.536154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"722c0850-5f3e-461f-b133-6f9fb91dfd59","Type":"ContainerStarted","Data":"bf4d57c17cf25ab0ddaaec2675d97f10399a058c20bb36f9259fc5df1fe22ea4"} Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.538840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e80b1a73-65bd-4626-a093-7464dbb3cb56","Type":"ContainerStarted","Data":"1618450818ff5d92dbac6188a1319ca4b92eb84d5dde9d92b8fe71b73b0f9abf"} Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.538882 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e80b1a73-65bd-4626-a093-7464dbb3cb56","Type":"ContainerStarted","Data":"d6aa9ff459fa40d829c40762b88ceaab86b56d55a7f221fe6cabaea5173f90ad"} Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.538900 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e80b1a73-65bd-4626-a093-7464dbb3cb56","Type":"ContainerStarted","Data":"12b17234a2926bfe24d48c94e5e82bb2dbc4390825e83fe4623fa880447b9d67"} Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.573512 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.57349382 podStartE2EDuration="2.57349382s" podCreationTimestamp="2026-02-19 21:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:24:12.565159115 +0000 UTC m=+1483.193002165" watchObservedRunningTime="2026-02-19 21:24:12.57349382 +0000 UTC m=+1483.201336860" Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.618111 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe39c39-ee1f-49c5-bd83-a984ee05475d" path="/var/lib/kubelet/pods/2fe39c39-ee1f-49c5-bd83-a984ee05475d/volumes" Feb 19 21:24:12 crc kubenswrapper[4886]: I0219 21:24:12.619142 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2615997-dab1-4bc3-a5cd-1f89031674c8" path="/var/lib/kubelet/pods/d2615997-dab1-4bc3-a5cd-1f89031674c8/volumes" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.308457 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.348812 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-combined-ca-bundle\") pod \"dd8e5d10-ef29-4c16-9741-50b48418f573\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.348977 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-public-tls-certs\") pod \"dd8e5d10-ef29-4c16-9741-50b48418f573\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.349048 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8e5d10-ef29-4c16-9741-50b48418f573-logs\") pod \"dd8e5d10-ef29-4c16-9741-50b48418f573\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.349119 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hsd\" (UniqueName: \"kubernetes.io/projected/dd8e5d10-ef29-4c16-9741-50b48418f573-kube-api-access-k5hsd\") pod \"dd8e5d10-ef29-4c16-9741-50b48418f573\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.349206 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-internal-tls-certs\") pod \"dd8e5d10-ef29-4c16-9741-50b48418f573\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.349334 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-config-data\") pod \"dd8e5d10-ef29-4c16-9741-50b48418f573\" (UID: \"dd8e5d10-ef29-4c16-9741-50b48418f573\") " Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.349986 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd8e5d10-ef29-4c16-9741-50b48418f573-logs" (OuterVolumeSpecName: "logs") pod "dd8e5d10-ef29-4c16-9741-50b48418f573" (UID: "dd8e5d10-ef29-4c16-9741-50b48418f573"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.351084 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8e5d10-ef29-4c16-9741-50b48418f573-logs\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.364681 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8e5d10-ef29-4c16-9741-50b48418f573-kube-api-access-k5hsd" (OuterVolumeSpecName: "kube-api-access-k5hsd") pod "dd8e5d10-ef29-4c16-9741-50b48418f573" (UID: "dd8e5d10-ef29-4c16-9741-50b48418f573"). InnerVolumeSpecName "kube-api-access-k5hsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.402425 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-config-data" (OuterVolumeSpecName: "config-data") pod "dd8e5d10-ef29-4c16-9741-50b48418f573" (UID: "dd8e5d10-ef29-4c16-9741-50b48418f573"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.453331 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hsd\" (UniqueName: \"kubernetes.io/projected/dd8e5d10-ef29-4c16-9741-50b48418f573-kube-api-access-k5hsd\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.453362 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.453459 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd8e5d10-ef29-4c16-9741-50b48418f573" (UID: "dd8e5d10-ef29-4c16-9741-50b48418f573"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.477643 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd8e5d10-ef29-4c16-9741-50b48418f573" (UID: "dd8e5d10-ef29-4c16-9741-50b48418f573"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.500856 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd8e5d10-ef29-4c16-9741-50b48418f573" (UID: "dd8e5d10-ef29-4c16-9741-50b48418f573"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.561759 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"722c0850-5f3e-461f-b133-6f9fb91dfd59","Type":"ContainerStarted","Data":"6d6b7a7fcb7a0561e4df9870118d042bc69bed89819f4cfbf389e0c9f0408b50"} Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.562693 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.562854 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.562870 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e5d10-ef29-4c16-9741-50b48418f573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.564290 4886 generic.go:334] "Generic (PLEG): container finished" podID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerID="33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e" exitCode=0 Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.564356 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.564396 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd8e5d10-ef29-4c16-9741-50b48418f573","Type":"ContainerDied","Data":"33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e"} Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.564428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd8e5d10-ef29-4c16-9741-50b48418f573","Type":"ContainerDied","Data":"a71aa9ce6e88233fb3f3d639327443e46668495749b7f0fd0252decd70843848"} Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.564446 4886 scope.go:117] "RemoveContainer" containerID="33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.582970 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.582951082 podStartE2EDuration="2.582951082s" podCreationTimestamp="2026-02-19 21:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:24:13.580113792 +0000 UTC m=+1484.207956842" watchObservedRunningTime="2026-02-19 21:24:13.582951082 +0000 UTC m=+1484.210794132" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.592536 4886 scope.go:117] "RemoveContainer" containerID="54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.612318 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.631842 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.635432 4886 scope.go:117] "RemoveContainer" 
containerID="33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e" Feb 19 21:24:13 crc kubenswrapper[4886]: E0219 21:24:13.636784 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e\": container with ID starting with 33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e not found: ID does not exist" containerID="33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.636838 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e"} err="failed to get container status \"33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e\": rpc error: code = NotFound desc = could not find container \"33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e\": container with ID starting with 33d4ad497a99ed409a19e949816cb34ec7adc099f91bb7bb3dce1a68c665489e not found: ID does not exist" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.636871 4886 scope.go:117] "RemoveContainer" containerID="54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0" Feb 19 21:24:13 crc kubenswrapper[4886]: E0219 21:24:13.675415 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0\": container with ID starting with 54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0 not found: ID does not exist" containerID="54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.675511 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0"} err="failed to get container status \"54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0\": rpc error: code = NotFound desc = could not find container \"54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0\": container with ID starting with 54ccb0efa97e65aeaf926f3c90aa32a07c822ddc7a245fd385ab1f7c8dc020b0 not found: ID does not exist" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.686555 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 21:24:13 crc kubenswrapper[4886]: E0219 21:24:13.687432 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-log" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.687456 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-log" Feb 19 21:24:13 crc kubenswrapper[4886]: E0219 21:24:13.687491 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-api" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.687506 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-api" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.687832 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-log" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.687881 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" containerName="nova-api-api" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.699650 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.701705 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.706759 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.707994 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.708360 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.881653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.881707 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-config-data\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.881730 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvvr\" (UniqueName: \"kubernetes.io/projected/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-kube-api-access-xkvvr\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.881859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-logs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.881886 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-public-tls-certs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.881925 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.984736 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.984825 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-config-data\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.984871 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvvr\" (UniqueName: \"kubernetes.io/projected/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-kube-api-access-xkvvr\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" 
Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.984958 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-logs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.985014 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-public-tls-certs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.985089 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.985735 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-logs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.989907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.990213 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.990500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-config-data\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:13 crc kubenswrapper[4886]: I0219 21:24:13.999365 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:14 crc kubenswrapper[4886]: I0219 21:24:14.004615 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvvr\" (UniqueName: \"kubernetes.io/projected/5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e-kube-api-access-xkvvr\") pod \"nova-api-0\" (UID: \"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e\") " pod="openstack/nova-api-0" Feb 19 21:24:14 crc kubenswrapper[4886]: I0219 21:24:14.024227 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 21:24:14 crc kubenswrapper[4886]: I0219 21:24:14.527011 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 21:24:14 crc kubenswrapper[4886]: I0219 21:24:14.582940 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e","Type":"ContainerStarted","Data":"f64feb347af97c68ea46805ee1878b7a260287978c831e07b70bf798dd48d7ed"} Feb 19 21:24:14 crc kubenswrapper[4886]: I0219 21:24:14.627071 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8e5d10-ef29-4c16-9741-50b48418f573" path="/var/lib/kubelet/pods/dd8e5d10-ef29-4c16-9741-50b48418f573/volumes" Feb 19 21:24:15 crc kubenswrapper[4886]: I0219 21:24:15.628307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e","Type":"ContainerStarted","Data":"def21cfa7ac3477c1813e67959916095e32e8c189d444917de5cdd19986a16fc"} Feb 19 21:24:15 crc kubenswrapper[4886]: I0219 21:24:15.628698 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e","Type":"ContainerStarted","Data":"ca4329d25777729e625a0c3a62bd4a01d4615a69685c4905f692ef25107595e0"} Feb 19 21:24:15 crc kubenswrapper[4886]: I0219 21:24:15.673890 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.673854082 podStartE2EDuration="2.673854082s" podCreationTimestamp="2026-02-19 21:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:24:15.658521024 +0000 UTC m=+1486.286364074" watchObservedRunningTime="2026-02-19 21:24:15.673854082 +0000 UTC m=+1486.301697132" Feb 19 21:24:16 crc kubenswrapper[4886]: I0219 21:24:16.276725 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 21:24:16 crc kubenswrapper[4886]: I0219 21:24:16.277003 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 21:24:16 crc kubenswrapper[4886]: I0219 21:24:16.961076 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 21:24:18 crc kubenswrapper[4886]: I0219 21:24:18.324921 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:24:18 crc kubenswrapper[4886]: I0219 21:24:18.325350 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:24:21 crc kubenswrapper[4886]: I0219 21:24:21.277158 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 21:24:21 crc kubenswrapper[4886]: I0219 21:24:21.277847 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 21:24:21 crc kubenswrapper[4886]: I0219 21:24:21.961635 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 21:24:22 crc kubenswrapper[4886]: I0219 21:24:22.011357 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 21:24:22 crc kubenswrapper[4886]: I0219 21:24:22.290437 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="e80b1a73-65bd-4626-a093-7464dbb3cb56" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 21:24:22 crc kubenswrapper[4886]: I0219 21:24:22.290437 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e80b1a73-65bd-4626-a093-7464dbb3cb56" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 21:24:22 crc kubenswrapper[4886]: I0219 21:24:22.737592 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 21:24:24 crc kubenswrapper[4886]: I0219 21:24:24.024768 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 21:24:24 crc kubenswrapper[4886]: I0219 21:24:24.025151 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 21:24:25 crc kubenswrapper[4886]: I0219 21:24:25.035439 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 21:24:25 crc kubenswrapper[4886]: I0219 21:24:25.035495 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5db979d3-f5a1-4ef3-9b28-a9d8d2c71e2e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 21:24:26 crc kubenswrapper[4886]: I0219 21:24:26.882631 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 
21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.282335 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.285039 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.288664 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.566872 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.567393 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="517c324e-c2e0-4775-83fd-c9b811305eb0" containerName="kube-state-metrics" containerID="cri-o://8580b0392770d2d79bff5e52569df9d70f9e320b7657aa1028609e9a43562f9f" gracePeriod=30 Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.656301 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.656546 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" containerName="mysqld-exporter" containerID="cri-o://2622b6ba4ef66855489ea8bf6cb4d8c8c7ff92e63e3c380e8626086fdefda345" gracePeriod=30 Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.843884 4886 generic.go:334] "Generic (PLEG): container finished" podID="517c324e-c2e0-4775-83fd-c9b811305eb0" containerID="8580b0392770d2d79bff5e52569df9d70f9e320b7657aa1028609e9a43562f9f" exitCode=2 Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.843924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"517c324e-c2e0-4775-83fd-c9b811305eb0","Type":"ContainerDied","Data":"8580b0392770d2d79bff5e52569df9d70f9e320b7657aa1028609e9a43562f9f"} Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.855754 4886 generic.go:334] "Generic (PLEG): container finished" podID="5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" containerID="2622b6ba4ef66855489ea8bf6cb4d8c8c7ff92e63e3c380e8626086fdefda345" exitCode=2 Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.855948 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd","Type":"ContainerDied","Data":"2622b6ba4ef66855489ea8bf6cb4d8c8c7ff92e63e3c380e8626086fdefda345"} Feb 19 21:24:31 crc kubenswrapper[4886]: I0219 21:24:31.868344 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.356327 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.367231 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.466691 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-config-data\") pod \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.466797 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk6sq\" (UniqueName: \"kubernetes.io/projected/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-kube-api-access-hk6sq\") pod \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.466897 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-combined-ca-bundle\") pod \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\" (UID: \"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd\") " Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.466985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2r9n\" (UniqueName: \"kubernetes.io/projected/517c324e-c2e0-4775-83fd-c9b811305eb0-kube-api-access-m2r9n\") pod \"517c324e-c2e0-4775-83fd-c9b811305eb0\" (UID: \"517c324e-c2e0-4775-83fd-c9b811305eb0\") " Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.474469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-kube-api-access-hk6sq" (OuterVolumeSpecName: "kube-api-access-hk6sq") pod "5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" (UID: "5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd"). InnerVolumeSpecName "kube-api-access-hk6sq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.475943 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517c324e-c2e0-4775-83fd-c9b811305eb0-kube-api-access-m2r9n" (OuterVolumeSpecName: "kube-api-access-m2r9n") pod "517c324e-c2e0-4775-83fd-c9b811305eb0" (UID: "517c324e-c2e0-4775-83fd-c9b811305eb0"). InnerVolumeSpecName "kube-api-access-m2r9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.514256 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" (UID: "5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.533891 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-config-data" (OuterVolumeSpecName: "config-data") pod "5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" (UID: "5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.569726 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2r9n\" (UniqueName: \"kubernetes.io/projected/517c324e-c2e0-4775-83fd-c9b811305eb0-kube-api-access-m2r9n\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.569762 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.569771 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk6sq\" (UniqueName: \"kubernetes.io/projected/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-kube-api-access-hk6sq\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.569780 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.867644 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.867642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd","Type":"ContainerDied","Data":"8b5f9309a91b928673b08928277c711abee3387369836cea2682959f1f0f857c"} Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.867807 4886 scope.go:117] "RemoveContainer" containerID="2622b6ba4ef66855489ea8bf6cb4d8c8c7ff92e63e3c380e8626086fdefda345" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.871602 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.871606 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"517c324e-c2e0-4775-83fd-c9b811305eb0","Type":"ContainerDied","Data":"f610e118982857b1154df7ea5681eb2df4ff08b98bf30930d4749c64611fed25"} Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.907475 4886 scope.go:117] "RemoveContainer" containerID="8580b0392770d2d79bff5e52569df9d70f9e320b7657aa1028609e9a43562f9f" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.912156 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.933897 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.956022 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.968582 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.986352 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:24:32 crc kubenswrapper[4886]: E0219 21:24:32.987009 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" containerName="mysqld-exporter" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.987035 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" containerName="mysqld-exporter" Feb 19 21:24:32 crc kubenswrapper[4886]: E0219 21:24:32.987056 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c324e-c2e0-4775-83fd-c9b811305eb0" containerName="kube-state-metrics" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.987065 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="517c324e-c2e0-4775-83fd-c9b811305eb0" containerName="kube-state-metrics" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.987425 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" containerName="mysqld-exporter" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.987454 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="517c324e-c2e0-4775-83fd-c9b811305eb0" containerName="kube-state-metrics" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.988569 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.990648 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 19 21:24:32 crc kubenswrapper[4886]: I0219 21:24:32.990863 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.003496 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.026328 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.028146 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.033917 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.034116 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.045189 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.082027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb877\" (UniqueName: \"kubernetes.io/projected/ab6edc08-03da-4978-af1f-e8b309e1bc3d-kube-api-access-mb877\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.082113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-config-data\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.082206 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.082253 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.183809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7rh\" (UniqueName: \"kubernetes.io/projected/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-api-access-zt7rh\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.183870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb877\" (UniqueName: \"kubernetes.io/projected/ab6edc08-03da-4978-af1f-e8b309e1bc3d-kube-api-access-mb877\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.183912 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.183931 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-config-data\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.183958 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.184034 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.184077 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.184128 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.189370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.194022 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.201429 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb877\" (UniqueName: \"kubernetes.io/projected/ab6edc08-03da-4978-af1f-e8b309e1bc3d-kube-api-access-mb877\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.203173 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6edc08-03da-4978-af1f-e8b309e1bc3d-config-data\") pod \"mysqld-exporter-0\" (UID: \"ab6edc08-03da-4978-af1f-e8b309e1bc3d\") " pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.286396 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.286495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7rh\" (UniqueName: \"kubernetes.io/projected/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-api-access-zt7rh\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.286555 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-state-metrics-tls-config\") pod 
\"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.286583 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.290343 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.290649 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.291560 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.309210 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt7rh\" (UniqueName: \"kubernetes.io/projected/a36103a2-c43e-47d3-ace0-5a41849c2a86-kube-api-access-zt7rh\") pod \"kube-state-metrics-0\" (UID: \"a36103a2-c43e-47d3-ace0-5a41849c2a86\") " 
pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.309810 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.352401 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.838581 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.862570 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:24:33 crc kubenswrapper[4886]: I0219 21:24:33.945351 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ab6edc08-03da-4978-af1f-e8b309e1bc3d","Type":"ContainerStarted","Data":"5e573d83cc118ed4a55b7cfe5751c1c3a95a06276b3aa5fee7f811043e86099b"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.000816 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.032933 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.033040 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.034212 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.034247 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.038568 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 21:24:34 crc 
kubenswrapper[4886]: I0219 21:24:34.047360 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.507797 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.508371 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-central-agent" containerID="cri-o://31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb" gracePeriod=30 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.508482 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="proxy-httpd" containerID="cri-o://672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f" gracePeriod=30 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.508521 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="sg-core" containerID="cri-o://ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0" gracePeriod=30 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.508553 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-notification-agent" containerID="cri-o://1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72" gracePeriod=30 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.625658 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517c324e-c2e0-4775-83fd-c9b811305eb0" path="/var/lib/kubelet/pods/517c324e-c2e0-4775-83fd-c9b811305eb0/volumes" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 
21:24:34.626359 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd" path="/var/lib/kubelet/pods/5232d9c6-d6bc-4e29-ba43-b3fb8ccb8fcd/volumes" Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.960709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ab6edc08-03da-4978-af1f-e8b309e1bc3d","Type":"ContainerStarted","Data":"3a4daba41f31909aeb488fbcf9ba6fbaec9c12c751d9730f0e5717a1d2277e9c"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.966367 4886 generic.go:334] "Generic (PLEG): container finished" podID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerID="672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f" exitCode=0 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.966405 4886 generic.go:334] "Generic (PLEG): container finished" podID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerID="ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0" exitCode=2 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.966419 4886 generic.go:334] "Generic (PLEG): container finished" podID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerID="31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb" exitCode=0 Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.966423 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerDied","Data":"672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.966453 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerDied","Data":"ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.966468 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerDied","Data":"31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.968375 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a36103a2-c43e-47d3-ace0-5a41849c2a86","Type":"ContainerStarted","Data":"8ef5958aeff553838b5944be52e8cf17a0effde81fca465a19115e934185f8fe"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.968442 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a36103a2-c43e-47d3-ace0-5a41849c2a86","Type":"ContainerStarted","Data":"252ba9188f46ea66da7c9c552a8ea17cbbeccb0e851f920dc7572e1f4d703077"} Feb 19 21:24:34 crc kubenswrapper[4886]: I0219 21:24:34.983660 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.436045392 podStartE2EDuration="2.983632666s" podCreationTimestamp="2026-02-19 21:24:32 +0000 UTC" firstStartedPulling="2026-02-19 21:24:33.862374831 +0000 UTC m=+1504.490217881" lastFinishedPulling="2026-02-19 21:24:34.409962105 +0000 UTC m=+1505.037805155" observedRunningTime="2026-02-19 21:24:34.979790362 +0000 UTC m=+1505.607633452" watchObservedRunningTime="2026-02-19 21:24:34.983632666 +0000 UTC m=+1505.611475726" Feb 19 21:24:35 crc kubenswrapper[4886]: I0219 21:24:35.005493 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.554022255 podStartE2EDuration="3.00547252s" podCreationTimestamp="2026-02-19 21:24:32 +0000 UTC" firstStartedPulling="2026-02-19 21:24:33.99000104 +0000 UTC m=+1504.617844100" lastFinishedPulling="2026-02-19 21:24:34.441451315 +0000 UTC m=+1505.069294365" observedRunningTime="2026-02-19 21:24:34.998408817 +0000 UTC m=+1505.626251897" watchObservedRunningTime="2026-02-19 21:24:35.00547252 +0000 
UTC m=+1505.633315560" Feb 19 21:24:35 crc kubenswrapper[4886]: I0219 21:24:35.980184 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.590799 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674168 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-run-httpd\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674328 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-config-data\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674409 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-scripts\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674607 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-log-httpd\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674711 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6xs\" (UniqueName: 
\"kubernetes.io/projected/d0fbe388-9f50-4303-8ba4-5f0d849293d1-kube-api-access-wt6xs\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-combined-ca-bundle\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674836 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-sg-core-conf-yaml\") pod \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\" (UID: \"d0fbe388-9f50-4303-8ba4-5f0d849293d1\") " Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.674860 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.675174 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.675966 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.675986 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbe388-9f50-4303-8ba4-5f0d849293d1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.682066 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-scripts" (OuterVolumeSpecName: "scripts") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.692478 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fbe388-9f50-4303-8ba4-5f0d849293d1-kube-api-access-wt6xs" (OuterVolumeSpecName: "kube-api-access-wt6xs") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "kube-api-access-wt6xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.723422 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.780295 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6xs\" (UniqueName: \"kubernetes.io/projected/d0fbe388-9f50-4303-8ba4-5f0d849293d1-kube-api-access-wt6xs\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.780331 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.780343 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.805590 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.819981 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-config-data" (OuterVolumeSpecName: "config-data") pod "d0fbe388-9f50-4303-8ba4-5f0d849293d1" (UID: "d0fbe388-9f50-4303-8ba4-5f0d849293d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.882194 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:39 crc kubenswrapper[4886]: I0219 21:24:39.882228 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbe388-9f50-4303-8ba4-5f0d849293d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.055618 4886 generic.go:334] "Generic (PLEG): container finished" podID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerID="1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72" exitCode=0 Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.055666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerDied","Data":"1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72"} Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.055710 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbe388-9f50-4303-8ba4-5f0d849293d1","Type":"ContainerDied","Data":"b39cccd45c7a56949d3e733d50e2f37662bdb535831f2c1f2d6bf2ea871a1d21"} Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.055733 4886 scope.go:117] "RemoveContainer" containerID="672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.056088 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.084577 4886 scope.go:117] "RemoveContainer" containerID="ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.095748 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.112119 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.123634 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.124305 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-central-agent" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.124380 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-central-agent" Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.124456 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="sg-core" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.124506 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="sg-core" Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.124568 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="proxy-httpd" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.124615 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="proxy-httpd" Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.124680 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-notification-agent" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.124727 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-notification-agent" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.124967 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-notification-agent" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.125035 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="sg-core" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.125096 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="proxy-httpd" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.125166 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" containerName="ceilometer-central-agent" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.127347 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.130573 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.130811 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.133481 4886 scope.go:117] "RemoveContainer" containerID="1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.134041 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.145512 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186376 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jk7b\" (UniqueName: \"kubernetes.io/projected/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-kube-api-access-6jk7b\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-scripts\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186557 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " 
pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186597 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186675 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-run-httpd\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186705 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186745 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-config-data\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.186823 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-log-httpd\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.211663 4886 scope.go:117] "RemoveContainer" 
containerID="31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.240128 4886 scope.go:117] "RemoveContainer" containerID="672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f" Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.240647 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f\": container with ID starting with 672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f not found: ID does not exist" containerID="672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.240675 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f"} err="failed to get container status \"672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f\": rpc error: code = NotFound desc = could not find container \"672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f\": container with ID starting with 672433eb3c5dbb15cd84553accd29c9a2ad9841f81db48c81b1121f4f006c40f not found: ID does not exist" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.240702 4886 scope.go:117] "RemoveContainer" containerID="ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0" Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.240934 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0\": container with ID starting with ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0 not found: ID does not exist" containerID="ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0" Feb 19 21:24:40 crc 
kubenswrapper[4886]: I0219 21:24:40.240953 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0"} err="failed to get container status \"ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0\": rpc error: code = NotFound desc = could not find container \"ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0\": container with ID starting with ace6caab6b532b9bbbb5ab321483003aef2bf26cdfac36975465cc7abb85f4c0 not found: ID does not exist" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.240966 4886 scope.go:117] "RemoveContainer" containerID="1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72" Feb 19 21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.241329 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72\": container with ID starting with 1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72 not found: ID does not exist" containerID="1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.241354 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72"} err="failed to get container status \"1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72\": rpc error: code = NotFound desc = could not find container \"1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72\": container with ID starting with 1ffcec1cf68ce314f23a5c89c45c458dfdd306761d0cb1c797acb89fcf9f5c72 not found: ID does not exist" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.241368 4886 scope.go:117] "RemoveContainer" containerID="31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb" Feb 19 
21:24:40 crc kubenswrapper[4886]: E0219 21:24:40.241887 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb\": container with ID starting with 31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb not found: ID does not exist" containerID="31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.241928 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb"} err="failed to get container status \"31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb\": rpc error: code = NotFound desc = could not find container \"31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb\": container with ID starting with 31e8598587920070a9fcf9cd0f67a04dbc5b850fe4fec88708b74ecf8b6e65fb not found: ID does not exist" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.288680 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jk7b\" (UniqueName: \"kubernetes.io/projected/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-kube-api-access-6jk7b\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.288810 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-scripts\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.288874 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.288895 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.289009 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-run-httpd\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.289040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.289073 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-config-data\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.289130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-log-httpd\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 
21:24:40.289609 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-log-httpd\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.289623 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-run-httpd\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.293015 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-scripts\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.295126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-config-data\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.297848 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.305741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " 
pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.307625 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jk7b\" (UniqueName: \"kubernetes.io/projected/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-kube-api-access-6jk7b\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.310952 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.509380 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:24:40 crc kubenswrapper[4886]: I0219 21:24:40.618717 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fbe388-9f50-4303-8ba4-5f0d849293d1" path="/var/lib/kubelet/pods/d0fbe388-9f50-4303-8ba4-5f0d849293d1/volumes" Feb 19 21:24:41 crc kubenswrapper[4886]: I0219 21:24:41.080149 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:24:41 crc kubenswrapper[4886]: I0219 21:24:41.095206 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerStarted","Data":"465ea151fc38c9d3f0499fcf13156bd409d5014b758ecaf8c790a0d28e691c15"} Feb 19 21:24:42 crc kubenswrapper[4886]: I0219 21:24:42.108170 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerStarted","Data":"96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de"} Feb 19 21:24:43 crc kubenswrapper[4886]: I0219 21:24:43.134153 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerStarted","Data":"f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4"} Feb 19 21:24:43 crc kubenswrapper[4886]: I0219 21:24:43.375672 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 21:24:44 crc kubenswrapper[4886]: I0219 21:24:44.150965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerStarted","Data":"7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536"} Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.119599 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pv96p"] Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.125234 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.170959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv96p"] Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.195213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerStarted","Data":"054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0"} Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.198168 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.218519 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.839363227 podStartE2EDuration="6.218501933s" podCreationTimestamp="2026-02-19 21:24:40 +0000 UTC" firstStartedPulling="2026-02-19 
21:24:41.068232419 +0000 UTC m=+1511.696075469" lastFinishedPulling="2026-02-19 21:24:45.447371115 +0000 UTC m=+1516.075214175" observedRunningTime="2026-02-19 21:24:46.215446228 +0000 UTC m=+1516.843289278" watchObservedRunningTime="2026-02-19 21:24:46.218501933 +0000 UTC m=+1516.846344983" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.234513 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqr7\" (UniqueName: \"kubernetes.io/projected/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-kube-api-access-5bqr7\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.234673 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-utilities\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.234728 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-catalog-content\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.337304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-utilities\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.337626 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-catalog-content\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.337864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bqr7\" (UniqueName: \"kubernetes.io/projected/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-kube-api-access-5bqr7\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.340330 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-utilities\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.340893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-catalog-content\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.361127 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bqr7\" (UniqueName: \"kubernetes.io/projected/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-kube-api-access-5bqr7\") pod \"redhat-marketplace-pv96p\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.453150 4886 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:46 crc kubenswrapper[4886]: I0219 21:24:46.961699 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv96p"] Feb 19 21:24:46 crc kubenswrapper[4886]: W0219 21:24:46.980507 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f0045af_88fe_46e4_a9f2_b4cbacc2eccd.slice/crio-ff3be53637822bf802f5ed0f2250db701a7adc758c2b88385c5b681be4ee64d7 WatchSource:0}: Error finding container ff3be53637822bf802f5ed0f2250db701a7adc758c2b88385c5b681be4ee64d7: Status 404 returned error can't find the container with id ff3be53637822bf802f5ed0f2250db701a7adc758c2b88385c5b681be4ee64d7 Feb 19 21:24:47 crc kubenswrapper[4886]: I0219 21:24:47.218062 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerStarted","Data":"c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa"} Feb 19 21:24:47 crc kubenswrapper[4886]: I0219 21:24:47.218452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerStarted","Data":"ff3be53637822bf802f5ed0f2250db701a7adc758c2b88385c5b681be4ee64d7"} Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.244834 4886 generic.go:334] "Generic (PLEG): container finished" podID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerID="c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa" exitCode=0 Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.247032 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" 
event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerDied","Data":"c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa"} Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.247107 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerStarted","Data":"ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444"} Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.334417 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.334478 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.334522 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.335610 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:24:48 crc kubenswrapper[4886]: I0219 21:24:48.335658 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" gracePeriod=600 Feb 19 21:24:48 crc kubenswrapper[4886]: E0219 21:24:48.501280 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:24:49 crc kubenswrapper[4886]: I0219 21:24:49.268858 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" exitCode=0 Feb 19 21:24:49 crc kubenswrapper[4886]: I0219 21:24:49.270149 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c"} Feb 19 21:24:49 crc kubenswrapper[4886]: I0219 21:24:49.270206 4886 scope.go:117] "RemoveContainer" containerID="f25290d2bb432baa17ed9b3c0a6cf51aa88d17f019c0cb80644d5e372ee8ab49" Feb 19 21:24:49 crc kubenswrapper[4886]: I0219 21:24:49.270621 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:24:49 crc kubenswrapper[4886]: E0219 21:24:49.270912 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.292198 4886 generic.go:334] "Generic (PLEG): container finished" podID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerID="ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444" exitCode=0 Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.292229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerDied","Data":"ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444"} Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.493585 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbzt5"] Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.496003 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.508409 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbzt5"] Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.647357 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-catalog-content\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.647690 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptjh\" (UniqueName: \"kubernetes.io/projected/f3d636c6-a81a-483c-8c15-d3902c962905-kube-api-access-6ptjh\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.647872 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-utilities\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.749431 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-catalog-content\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.749670 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6ptjh\" (UniqueName: \"kubernetes.io/projected/f3d636c6-a81a-483c-8c15-d3902c962905-kube-api-access-6ptjh\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.749898 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-utilities\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.749962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-catalog-content\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.750254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-utilities\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.774042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptjh\" (UniqueName: \"kubernetes.io/projected/f3d636c6-a81a-483c-8c15-d3902c962905-kube-api-access-6ptjh\") pod \"community-operators-bbzt5\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:50 crc kubenswrapper[4886]: I0219 21:24:50.873814 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:24:51 crc kubenswrapper[4886]: I0219 21:24:51.336068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerStarted","Data":"36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a"} Feb 19 21:24:51 crc kubenswrapper[4886]: I0219 21:24:51.366536 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pv96p" podStartSLOduration=1.882735162 podStartE2EDuration="5.366515573s" podCreationTimestamp="2026-02-19 21:24:46 +0000 UTC" firstStartedPulling="2026-02-19 21:24:47.220388762 +0000 UTC m=+1517.848231812" lastFinishedPulling="2026-02-19 21:24:50.704169163 +0000 UTC m=+1521.332012223" observedRunningTime="2026-02-19 21:24:51.359973533 +0000 UTC m=+1521.987816583" watchObservedRunningTime="2026-02-19 21:24:51.366515573 +0000 UTC m=+1521.994358623" Feb 19 21:24:51 crc kubenswrapper[4886]: I0219 21:24:51.395529 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbzt5"] Feb 19 21:24:52 crc kubenswrapper[4886]: I0219 21:24:52.352757 4886 generic.go:334] "Generic (PLEG): container finished" podID="f3d636c6-a81a-483c-8c15-d3902c962905" containerID="1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb" exitCode=0 Feb 19 21:24:52 crc kubenswrapper[4886]: I0219 21:24:52.352817 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerDied","Data":"1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb"} Feb 19 21:24:52 crc kubenswrapper[4886]: I0219 21:24:52.353310 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" 
event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerStarted","Data":"7afe31758c695da3ceb052db8e376ebca7d58501d218807bc8c9301d14985c6d"} Feb 19 21:24:53 crc kubenswrapper[4886]: I0219 21:24:53.367603 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerStarted","Data":"b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9"} Feb 19 21:24:56 crc kubenswrapper[4886]: I0219 21:24:56.404407 4886 generic.go:334] "Generic (PLEG): container finished" podID="f3d636c6-a81a-483c-8c15-d3902c962905" containerID="b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9" exitCode=0 Feb 19 21:24:56 crc kubenswrapper[4886]: I0219 21:24:56.404465 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerDied","Data":"b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9"} Feb 19 21:24:56 crc kubenswrapper[4886]: I0219 21:24:56.454052 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:56 crc kubenswrapper[4886]: I0219 21:24:56.454102 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:24:57 crc kubenswrapper[4886]: I0219 21:24:57.417371 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerStarted","Data":"903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575"} Feb 19 21:24:57 crc kubenswrapper[4886]: I0219 21:24:57.451203 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bbzt5" podStartSLOduration=2.82411703 
podStartE2EDuration="7.451176016s" podCreationTimestamp="2026-02-19 21:24:50 +0000 UTC" firstStartedPulling="2026-02-19 21:24:52.356108621 +0000 UTC m=+1522.983951711" lastFinishedPulling="2026-02-19 21:24:56.983167637 +0000 UTC m=+1527.611010697" observedRunningTime="2026-02-19 21:24:57.435814631 +0000 UTC m=+1528.063657711" watchObservedRunningTime="2026-02-19 21:24:57.451176016 +0000 UTC m=+1528.079019086" Feb 19 21:24:57 crc kubenswrapper[4886]: I0219 21:24:57.508957 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pv96p" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="registry-server" probeResult="failure" output=< Feb 19 21:24:57 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:24:57 crc kubenswrapper[4886]: > Feb 19 21:25:00 crc kubenswrapper[4886]: I0219 21:25:00.877250 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:25:00 crc kubenswrapper[4886]: I0219 21:25:00.879902 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:25:01 crc kubenswrapper[4886]: I0219 21:25:01.601818 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:25:01 crc kubenswrapper[4886]: E0219 21:25:01.602543 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:25:01 crc kubenswrapper[4886]: I0219 21:25:01.936280 4886 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-bbzt5" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="registry-server" probeResult="failure" output=< Feb 19 21:25:01 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:25:01 crc kubenswrapper[4886]: > Feb 19 21:25:06 crc kubenswrapper[4886]: I0219 21:25:06.505522 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:25:06 crc kubenswrapper[4886]: I0219 21:25:06.569058 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:25:06 crc kubenswrapper[4886]: I0219 21:25:06.767209 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv96p"] Feb 19 21:25:08 crc kubenswrapper[4886]: I0219 21:25:08.570078 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pv96p" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="registry-server" containerID="cri-o://36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a" gracePeriod=2 Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.164407 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.339175 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-catalog-content\") pod \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.339602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-utilities\") pod \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.339889 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bqr7\" (UniqueName: \"kubernetes.io/projected/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-kube-api-access-5bqr7\") pod \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\" (UID: \"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd\") " Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.340310 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-utilities" (OuterVolumeSpecName: "utilities") pod "3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" (UID: "3f0045af-88fe-46e4-a9f2-b4cbacc2eccd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.341003 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.351008 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-kube-api-access-5bqr7" (OuterVolumeSpecName: "kube-api-access-5bqr7") pod "3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" (UID: "3f0045af-88fe-46e4-a9f2-b4cbacc2eccd"). InnerVolumeSpecName "kube-api-access-5bqr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.372526 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" (UID: "3f0045af-88fe-46e4-a9f2-b4cbacc2eccd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.443824 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.443864 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bqr7\" (UniqueName: \"kubernetes.io/projected/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd-kube-api-access-5bqr7\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.585662 4886 generic.go:334] "Generic (PLEG): container finished" podID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerID="36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a" exitCode=0 Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.585709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerDied","Data":"36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a"} Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.585739 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv96p" event={"ID":"3f0045af-88fe-46e4-a9f2-b4cbacc2eccd","Type":"ContainerDied","Data":"ff3be53637822bf802f5ed0f2250db701a7adc758c2b88385c5b681be4ee64d7"} Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.585745 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv96p" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.585770 4886 scope.go:117] "RemoveContainer" containerID="36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.632272 4886 scope.go:117] "RemoveContainer" containerID="ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.640419 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv96p"] Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.670413 4886 scope.go:117] "RemoveContainer" containerID="c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.674147 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv96p"] Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.731780 4886 scope.go:117] "RemoveContainer" containerID="36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a" Feb 19 21:25:09 crc kubenswrapper[4886]: E0219 21:25:09.733031 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a\": container with ID starting with 36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a not found: ID does not exist" containerID="36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.733082 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a"} err="failed to get container status \"36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a\": rpc error: code = NotFound desc = could not find container 
\"36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a\": container with ID starting with 36011327f2e37a2c5f58c37a6b9272797fe6e6f3c10a4eef0ee8eb2d9e4f115a not found: ID does not exist" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.733116 4886 scope.go:117] "RemoveContainer" containerID="ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444" Feb 19 21:25:09 crc kubenswrapper[4886]: E0219 21:25:09.733689 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444\": container with ID starting with ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444 not found: ID does not exist" containerID="ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.733881 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444"} err="failed to get container status \"ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444\": rpc error: code = NotFound desc = could not find container \"ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444\": container with ID starting with ced73972140aff0c8f63eb8897a8bb504ae2a5d0c03ab5f99e3a0d69f2ae0444 not found: ID does not exist" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.734041 4886 scope.go:117] "RemoveContainer" containerID="c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa" Feb 19 21:25:09 crc kubenswrapper[4886]: E0219 21:25:09.734776 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa\": container with ID starting with c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa not found: ID does not exist" 
containerID="c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa" Feb 19 21:25:09 crc kubenswrapper[4886]: I0219 21:25:09.734808 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa"} err="failed to get container status \"c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa\": rpc error: code = NotFound desc = could not find container \"c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa\": container with ID starting with c16412fe91cdf502f14039f9ef0c7e1149a335b633e599b439cc5865500227aa not found: ID does not exist" Feb 19 21:25:10 crc kubenswrapper[4886]: I0219 21:25:10.525084 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 21:25:10 crc kubenswrapper[4886]: I0219 21:25:10.642892 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" path="/var/lib/kubelet/pods/3f0045af-88fe-46e4-a9f2-b4cbacc2eccd/volumes" Feb 19 21:25:10 crc kubenswrapper[4886]: I0219 21:25:10.934395 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:25:10 crc kubenswrapper[4886]: I0219 21:25:10.983728 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:25:12 crc kubenswrapper[4886]: I0219 21:25:12.157428 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbzt5"] Feb 19 21:25:12 crc kubenswrapper[4886]: I0219 21:25:12.642541 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bbzt5" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="registry-server" containerID="cri-o://903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575" 
gracePeriod=2 Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.250387 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.374990 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-utilities\") pod \"f3d636c6-a81a-483c-8c15-d3902c962905\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.375165 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-catalog-content\") pod \"f3d636c6-a81a-483c-8c15-d3902c962905\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.375329 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ptjh\" (UniqueName: \"kubernetes.io/projected/f3d636c6-a81a-483c-8c15-d3902c962905-kube-api-access-6ptjh\") pod \"f3d636c6-a81a-483c-8c15-d3902c962905\" (UID: \"f3d636c6-a81a-483c-8c15-d3902c962905\") " Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.376771 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-utilities" (OuterVolumeSpecName: "utilities") pod "f3d636c6-a81a-483c-8c15-d3902c962905" (UID: "f3d636c6-a81a-483c-8c15-d3902c962905"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.377669 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.395691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d636c6-a81a-483c-8c15-d3902c962905-kube-api-access-6ptjh" (OuterVolumeSpecName: "kube-api-access-6ptjh") pod "f3d636c6-a81a-483c-8c15-d3902c962905" (UID: "f3d636c6-a81a-483c-8c15-d3902c962905"). InnerVolumeSpecName "kube-api-access-6ptjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.436923 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3d636c6-a81a-483c-8c15-d3902c962905" (UID: "f3d636c6-a81a-483c-8c15-d3902c962905"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.481314 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d636c6-a81a-483c-8c15-d3902c962905-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.481344 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ptjh\" (UniqueName: \"kubernetes.io/projected/f3d636c6-a81a-483c-8c15-d3902c962905-kube-api-access-6ptjh\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.654026 4886 generic.go:334] "Generic (PLEG): container finished" podID="f3d636c6-a81a-483c-8c15-d3902c962905" containerID="903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575" exitCode=0 Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.654080 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerDied","Data":"903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575"} Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.654106 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbzt5" event={"ID":"f3d636c6-a81a-483c-8c15-d3902c962905","Type":"ContainerDied","Data":"7afe31758c695da3ceb052db8e376ebca7d58501d218807bc8c9301d14985c6d"} Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.654124 4886 scope.go:117] "RemoveContainer" containerID="903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.654140 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbzt5" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.687240 4886 scope.go:117] "RemoveContainer" containerID="b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.708058 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbzt5"] Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.716139 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bbzt5"] Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.724063 4886 scope.go:117] "RemoveContainer" containerID="1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.775450 4886 scope.go:117] "RemoveContainer" containerID="903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575" Feb 19 21:25:13 crc kubenswrapper[4886]: E0219 21:25:13.776119 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575\": container with ID starting with 903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575 not found: ID does not exist" containerID="903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.776188 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575"} err="failed to get container status \"903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575\": rpc error: code = NotFound desc = could not find container \"903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575\": container with ID starting with 903b118f6ee8ae0c5a1072accd9d594ef976f728573bc6a9b795b429f5619575 not 
found: ID does not exist" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.776223 4886 scope.go:117] "RemoveContainer" containerID="b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9" Feb 19 21:25:13 crc kubenswrapper[4886]: E0219 21:25:13.776878 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9\": container with ID starting with b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9 not found: ID does not exist" containerID="b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.776915 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9"} err="failed to get container status \"b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9\": rpc error: code = NotFound desc = could not find container \"b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9\": container with ID starting with b77e653feae204d6c234282422b16fa1f177edf9c3c6980c40890b98945751e9 not found: ID does not exist" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.776943 4886 scope.go:117] "RemoveContainer" containerID="1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb" Feb 19 21:25:13 crc kubenswrapper[4886]: E0219 21:25:13.777355 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb\": container with ID starting with 1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb not found: ID does not exist" containerID="1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb" Feb 19 21:25:13 crc kubenswrapper[4886]: I0219 21:25:13.777411 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb"} err="failed to get container status \"1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb\": rpc error: code = NotFound desc = could not find container \"1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb\": container with ID starting with 1042166e673bf99a4ae39d4ffedf45657246595dc9f9c5a80da1b94510c7e4bb not found: ID does not exist" Feb 19 21:25:14 crc kubenswrapper[4886]: I0219 21:25:14.616404 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" path="/var/lib/kubelet/pods/f3d636c6-a81a-483c-8c15-d3902c962905/volumes" Feb 19 21:25:16 crc kubenswrapper[4886]: I0219 21:25:16.603223 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:25:16 crc kubenswrapper[4886]: E0219 21:25:16.604128 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.637122 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-x27rt"] Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.648127 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-x27rt"] Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739109 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-xx9l4"] Feb 19 21:25:21 crc kubenswrapper[4886]: E0219 21:25:21.739737 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="extract-content" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739759 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="extract-content" Feb 19 21:25:21 crc kubenswrapper[4886]: E0219 21:25:21.739781 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="registry-server" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739789 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="registry-server" Feb 19 21:25:21 crc kubenswrapper[4886]: E0219 21:25:21.739811 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="extract-utilities" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739819 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="extract-utilities" Feb 19 21:25:21 crc kubenswrapper[4886]: E0219 21:25:21.739839 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="extract-content" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739846 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="extract-content" Feb 19 21:25:21 crc kubenswrapper[4886]: E0219 21:25:21.739873 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="registry-server" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739882 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="registry-server" Feb 19 21:25:21 crc kubenswrapper[4886]: E0219 21:25:21.739901 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="extract-utilities" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.739909 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="extract-utilities" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.740186 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d636c6-a81a-483c-8c15-d3902c962905" containerName="registry-server" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.740214 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0045af-88fe-46e4-a9f2-b4cbacc2eccd" containerName="registry-server" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.741328 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.755114 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xx9l4"] Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.814555 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vb8j\" (UniqueName: \"kubernetes.io/projected/489598d7-0933-4271-92d3-26a24263c4bc-kube-api-access-7vb8j\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.814863 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-config-data\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.814990 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-combined-ca-bundle\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.916316 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-combined-ca-bundle\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.916508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vb8j\" (UniqueName: \"kubernetes.io/projected/489598d7-0933-4271-92d3-26a24263c4bc-kube-api-access-7vb8j\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.916553 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-config-data\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.923649 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-config-data\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.924003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-combined-ca-bundle\") pod \"heat-db-sync-xx9l4\" (UID: 
\"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.937009 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vb8j\" (UniqueName: \"kubernetes.io/projected/489598d7-0933-4271-92d3-26a24263c4bc-kube-api-access-7vb8j\") pod \"heat-db-sync-xx9l4\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:21 crc kubenswrapper[4886]: I0219 21:25:21.977953 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlhmw"] Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.000232 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.005318 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlhmw"] Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.071611 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xx9l4" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.126956 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8c9\" (UniqueName: \"kubernetes.io/projected/059c5dd0-de69-4d0b-843f-d8cc11c71d32-kube-api-access-5v8c9\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.127195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-catalog-content\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.127254 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-utilities\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.230061 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v8c9\" (UniqueName: \"kubernetes.io/projected/059c5dd0-de69-4d0b-843f-d8cc11c71d32-kube-api-access-5v8c9\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.230113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-catalog-content\") pod 
\"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.230183 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-utilities\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.231145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-utilities\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.231723 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-catalog-content\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.316040 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v8c9\" (UniqueName: \"kubernetes.io/projected/059c5dd0-de69-4d0b-843f-d8cc11c71d32-kube-api-access-5v8c9\") pod \"certified-operators-rlhmw\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.368773 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.625128 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336b4fc8-890f-4ace-baa3-587ebc3b27db" path="/var/lib/kubelet/pods/336b4fc8-890f-4ace-baa3-587ebc3b27db/volumes" Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.626120 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xx9l4"] Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.775718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xx9l4" event={"ID":"489598d7-0933-4271-92d3-26a24263c4bc","Type":"ContainerStarted","Data":"9a62968bcb94e367162710d115fdc8e6f9a2e9943771fe2158570ab5e2a3aefd"} Feb 19 21:25:22 crc kubenswrapper[4886]: I0219 21:25:22.943389 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlhmw"] Feb 19 21:25:23 crc kubenswrapper[4886]: I0219 21:25:23.780965 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:25:23 crc kubenswrapper[4886]: I0219 21:25:23.826064 4886 generic.go:334] "Generic (PLEG): container finished" podID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerID="0ed02cdc35717f60c7fa03a8fb309c0b4840132776e739cb32902563078e900d" exitCode=0 Feb 19 21:25:23 crc kubenswrapper[4886]: I0219 21:25:23.826105 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerDied","Data":"0ed02cdc35717f60c7fa03a8fb309c0b4840132776e739cb32902563078e900d"} Feb 19 21:25:23 crc kubenswrapper[4886]: I0219 21:25:23.826129 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" 
event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerStarted","Data":"e52a9512bd11838d27046450dd6cc7093caa95cf4cc983b87504fe81dac59a2c"} Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.279777 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.280116 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-central-agent" containerID="cri-o://96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de" gracePeriod=30 Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.280254 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="proxy-httpd" containerID="cri-o://054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0" gracePeriod=30 Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.280333 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="sg-core" containerID="cri-o://7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536" gracePeriod=30 Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.280416 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-notification-agent" containerID="cri-o://f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4" gracePeriod=30 Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.884720 4886 generic.go:334] "Generic (PLEG): container finished" podID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerID="054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0" exitCode=0 Feb 19 21:25:24 crc kubenswrapper[4886]: 
I0219 21:25:24.885231 4886 generic.go:334] "Generic (PLEG): container finished" podID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerID="7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536" exitCode=2 Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.884817 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerDied","Data":"054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0"} Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.885344 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerDied","Data":"7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536"} Feb 19 21:25:24 crc kubenswrapper[4886]: I0219 21:25:24.894522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerStarted","Data":"d5c5a70c3534360aad0a2a686fa17bc6d8762df8144c048cf3ad632725a85086"} Feb 19 21:25:25 crc kubenswrapper[4886]: I0219 21:25:25.266762 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:25:25 crc kubenswrapper[4886]: I0219 21:25:25.907836 4886 generic.go:334] "Generic (PLEG): container finished" podID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerID="96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de" exitCode=0 Feb 19 21:25:25 crc kubenswrapper[4886]: I0219 21:25:25.907884 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerDied","Data":"96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de"} Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.653193 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.774281 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jk7b\" (UniqueName: \"kubernetes.io/projected/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-kube-api-access-6jk7b\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.774388 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-sg-core-conf-yaml\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.774469 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-combined-ca-bundle\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.774532 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-ceilometer-tls-certs\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.775184 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-config-data\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.775242 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-scripts\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.775349 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-log-httpd\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.775370 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-run-httpd\") pod \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\" (UID: \"c6acc91d-1f4f-4ca8-826c-6b783b0155d2\") " Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.775784 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.776089 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.777560 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.777584 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.782472 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-scripts" (OuterVolumeSpecName: "scripts") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.827833 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.830634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-kube-api-access-6jk7b" (OuterVolumeSpecName: "kube-api-access-6jk7b") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "kube-api-access-6jk7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.882400 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.882426 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jk7b\" (UniqueName: \"kubernetes.io/projected/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-kube-api-access-6jk7b\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.882436 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.891445 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.936015 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.950412 4886 generic.go:334] "Generic (PLEG): container finished" podID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerID="d5c5a70c3534360aad0a2a686fa17bc6d8762df8144c048cf3ad632725a85086" exitCode=0 Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.950454 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerDied","Data":"d5c5a70c3534360aad0a2a686fa17bc6d8762df8144c048cf3ad632725a85086"} Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.958454 4886 generic.go:334] "Generic (PLEG): container finished" podID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerID="f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4" exitCode=0 Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.958488 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerDied","Data":"f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4"} Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.958514 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6acc91d-1f4f-4ca8-826c-6b783b0155d2","Type":"ContainerDied","Data":"465ea151fc38c9d3f0499fcf13156bd409d5014b758ecaf8c790a0d28e691c15"} Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.958529 4886 scope.go:117] "RemoveContainer" containerID="054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0" Feb 19 21:25:26 crc kubenswrapper[4886]: I0219 21:25:26.958659 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.002650 4886 scope.go:117] "RemoveContainer" containerID="7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.003032 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.003064 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.006069 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-config-data" (OuterVolumeSpecName: "config-data") pod "c6acc91d-1f4f-4ca8-826c-6b783b0155d2" (UID: "c6acc91d-1f4f-4ca8-826c-6b783b0155d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.104964 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6acc91d-1f4f-4ca8-826c-6b783b0155d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.165077 4886 scope.go:117] "RemoveContainer" containerID="f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.199279 4886 scope.go:117] "RemoveContainer" containerID="96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.232517 4886 scope.go:117] "RemoveContainer" containerID="054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.233115 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0\": container with ID starting with 054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0 not found: ID does not exist" containerID="054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.233162 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0"} err="failed to get container status \"054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0\": rpc error: code = NotFound desc = could not find container \"054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0\": container with ID starting with 054f65763180f67c47c0fdd563bceb73c68e5bb39bac6c92683129521b6915a0 not found: ID does not exist" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.233191 4886 scope.go:117] 
"RemoveContainer" containerID="7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.234439 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536\": container with ID starting with 7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536 not found: ID does not exist" containerID="7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.234468 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536"} err="failed to get container status \"7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536\": rpc error: code = NotFound desc = could not find container \"7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536\": container with ID starting with 7f82ead76e30aa6e5219d2e53f2d2d092dd1af1ad90b1b09c3a0b3f2aa226536 not found: ID does not exist" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.234490 4886 scope.go:117] "RemoveContainer" containerID="f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.236272 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4\": container with ID starting with f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4 not found: ID does not exist" containerID="f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.236295 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4"} err="failed to get container status \"f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4\": rpc error: code = NotFound desc = could not find container \"f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4\": container with ID starting with f98f372e57e8412304aa9955d4396ffd23ad9746dfe0a2b184a86a8b5664a7d4 not found: ID does not exist" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.236309 4886 scope.go:117] "RemoveContainer" containerID="96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.236697 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de\": container with ID starting with 96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de not found: ID does not exist" containerID="96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.236716 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de"} err="failed to get container status \"96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de\": rpc error: code = NotFound desc = could not find container \"96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de\": container with ID starting with 96f924440bd0ad4e64e4520499629d354f4f3f388192ec3f645a609e383420de not found: ID does not exist" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.295998 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.323890 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 
19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.346662 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.347240 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-notification-agent" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347278 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-notification-agent" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.347294 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="proxy-httpd" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347304 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="proxy-httpd" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.347321 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="sg-core" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347329 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="sg-core" Feb 19 21:25:27 crc kubenswrapper[4886]: E0219 21:25:27.347380 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-central-agent" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347389 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-central-agent" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347721 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-central-agent" Feb 19 21:25:27 crc 
kubenswrapper[4886]: I0219 21:25:27.347748 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="sg-core" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347769 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="proxy-httpd" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.347783 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" containerName="ceilometer-notification-agent" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.350791 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.353790 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.353848 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.353789 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.369056 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513175 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-run-httpd\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513280 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-log-httpd\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513316 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8857\" (UniqueName: \"kubernetes.io/projected/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-kube-api-access-h8857\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-scripts\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513471 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-config-data\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.513604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615085 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-log-httpd\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615396 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8857\" (UniqueName: \"kubernetes.io/projected/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-kube-api-access-h8857\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615445 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-scripts\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615478 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615495 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-log-httpd\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-config-data\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.615705 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-run-httpd\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.616082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-run-httpd\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc 
kubenswrapper[4886]: I0219 21:25:27.621058 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.634136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.635169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-scripts\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.640359 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-config-data\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.640992 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.644636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8857\" (UniqueName: \"kubernetes.io/projected/1e98987f-2584-4d4c-ae5e-7fd6bdb947d5-kube-api-access-h8857\") pod \"ceilometer-0\" 
(UID: \"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5\") " pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.680312 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 21:25:27 crc kubenswrapper[4886]: I0219 21:25:27.997327 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerStarted","Data":"7f6316ac3e91d81d578eb8edb8979b82bbf6909c33ab45cb7360f3ed70e0f9da"} Feb 19 21:25:28 crc kubenswrapper[4886]: I0219 21:25:28.039229 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlhmw" podStartSLOduration=3.475635028 podStartE2EDuration="7.03920966s" podCreationTimestamp="2026-02-19 21:25:21 +0000 UTC" firstStartedPulling="2026-02-19 21:25:23.839428547 +0000 UTC m=+1554.467271597" lastFinishedPulling="2026-02-19 21:25:27.403003179 +0000 UTC m=+1558.030846229" observedRunningTime="2026-02-19 21:25:28.025081234 +0000 UTC m=+1558.652924284" watchObservedRunningTime="2026-02-19 21:25:28.03920966 +0000 UTC m=+1558.667052710" Feb 19 21:25:28 crc kubenswrapper[4886]: I0219 21:25:28.398485 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 21:25:28 crc kubenswrapper[4886]: I0219 21:25:28.615337 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6acc91d-1f4f-4ca8-826c-6b783b0155d2" path="/var/lib/kubelet/pods/c6acc91d-1f4f-4ca8-826c-6b783b0155d2/volumes" Feb 19 21:25:28 crc kubenswrapper[4886]: I0219 21:25:28.873512 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" containerID="cri-o://6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a" gracePeriod=604795 Feb 19 21:25:29 crc kubenswrapper[4886]: I0219 
21:25:29.019292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerStarted","Data":"bc56a53bc6a3a14dc7a73568da643a44d2a4ed1d5f3e3f8c18648d0937ae17b0"} Feb 19 21:25:30 crc kubenswrapper[4886]: I0219 21:25:30.371306 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="rabbitmq" containerID="cri-o://5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751" gracePeriod=604795 Feb 19 21:25:30 crc kubenswrapper[4886]: I0219 21:25:30.623690 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:25:30 crc kubenswrapper[4886]: E0219 21:25:30.623947 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:25:32 crc kubenswrapper[4886]: I0219 21:25:32.369751 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:32 crc kubenswrapper[4886]: I0219 21:25:32.370318 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:32 crc kubenswrapper[4886]: I0219 21:25:32.435701 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:33 crc kubenswrapper[4886]: I0219 21:25:33.155051 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:33 crc kubenswrapper[4886]: I0219 21:25:33.244461 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlhmw"] Feb 19 21:25:35 crc kubenswrapper[4886]: I0219 21:25:35.112553 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlhmw" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="registry-server" containerID="cri-o://7f6316ac3e91d81d578eb8edb8979b82bbf6909c33ab45cb7360f3ed70e0f9da" gracePeriod=2 Feb 19 21:25:36 crc kubenswrapper[4886]: I0219 21:25:36.152903 4886 generic.go:334] "Generic (PLEG): container finished" podID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerID="6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a" exitCode=0 Feb 19 21:25:36 crc kubenswrapper[4886]: I0219 21:25:36.153503 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9b78f5c0-b665-4723-bddd-e6cccd0fca87","Type":"ContainerDied","Data":"6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a"} Feb 19 21:25:36 crc kubenswrapper[4886]: I0219 21:25:36.156339 4886 generic.go:334] "Generic (PLEG): container finished" podID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerID="7f6316ac3e91d81d578eb8edb8979b82bbf6909c33ab45cb7360f3ed70e0f9da" exitCode=0 Feb 19 21:25:36 crc kubenswrapper[4886]: I0219 21:25:36.156372 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerDied","Data":"7f6316ac3e91d81d578eb8edb8979b82bbf6909c33ab45cb7360f3ed70e0f9da"} Feb 19 21:25:37 crc kubenswrapper[4886]: I0219 21:25:37.177742 4886 generic.go:334] "Generic (PLEG): container finished" podID="c4270056-5929-46be-bced-090af7fb6761" containerID="5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751" exitCode=0 Feb 19 
21:25:37 crc kubenswrapper[4886]: I0219 21:25:37.178057 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4270056-5929-46be-bced-090af7fb6761","Type":"ContainerDied","Data":"5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751"} Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.338976 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.910994 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.920725 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.993615 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.993715 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-erlang-cookie\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.993782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-config-data\") pod 
\"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.993867 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-catalog-content\") pod \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.993893 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-tls\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.993982 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-plugins-conf\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.994089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-server-conf\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.996517 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.997403 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-utilities\") pod \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.997476 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b78f5c0-b665-4723-bddd-e6cccd0fca87-pod-info\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.997621 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-confd\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.997721 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.997763 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b78f5c0-b665-4723-bddd-e6cccd0fca87-erlang-cookie-secret\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.997804 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrh5p\" 
(UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-kube-api-access-wrh5p\") pod \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\" (UID: \"9b78f5c0-b665-4723-bddd-e6cccd0fca87\") " Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.998441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-utilities" (OuterVolumeSpecName: "utilities") pod "059c5dd0-de69-4d0b-843f-d8cc11c71d32" (UID: "059c5dd0-de69-4d0b-843f-d8cc11c71d32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.998632 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:38 crc kubenswrapper[4886]: I0219 21:25:38.998820 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.002966 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v8c9\" (UniqueName: \"kubernetes.io/projected/059c5dd0-de69-4d0b-843f-d8cc11c71d32-kube-api-access-5v8c9\") pod \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\" (UID: \"059c5dd0-de69-4d0b-843f-d8cc11c71d32\") " Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.004811 4886 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.004842 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.004855 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.004869 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.006691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b78f5c0-b665-4723-bddd-e6cccd0fca87-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.020814 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b78f5c0-b665-4723-bddd-e6cccd0fca87-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.027472 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059c5dd0-de69-4d0b-843f-d8cc11c71d32-kube-api-access-5v8c9" (OuterVolumeSpecName: "kube-api-access-5v8c9") pod "059c5dd0-de69-4d0b-843f-d8cc11c71d32" (UID: "059c5dd0-de69-4d0b-843f-d8cc11c71d32"). InnerVolumeSpecName "kube-api-access-5v8c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.028310 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.029790 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-kube-api-access-wrh5p" (OuterVolumeSpecName: "kube-api-access-wrh5p") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "kube-api-access-wrh5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.061635 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685" (OuterVolumeSpecName: "persistence") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "pvc-365d36af-2f4e-4fca-b720-deca58835685". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.088040 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-config-data" (OuterVolumeSpecName: "config-data") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.091991 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "059c5dd0-de69-4d0b-843f-d8cc11c71d32" (UID: "059c5dd0-de69-4d0b-843f-d8cc11c71d32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.101443 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107509 4886 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b78f5c0-b665-4723-bddd-e6cccd0fca87-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107538 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrh5p\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-kube-api-access-wrh5p\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107550 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v8c9\" (UniqueName: \"kubernetes.io/projected/059c5dd0-de69-4d0b-843f-d8cc11c71d32-kube-api-access-5v8c9\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107580 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") on node \"crc\" " Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107593 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107602 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c5dd0-de69-4d0b-843f-d8cc11c71d32-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107611 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-tls\") on node \"crc\" 
DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107618 4886 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b78f5c0-b665-4723-bddd-e6cccd0fca87-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.107625 4886 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b78f5c0-b665-4723-bddd-e6cccd0fca87-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.217517 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.218078 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-365d36af-2f4e-4fca-b720-deca58835685" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685") on node "crc" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.226250 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b78f5c0-b665-4723-bddd-e6cccd0fca87" (UID: "9b78f5c0-b665-4723-bddd-e6cccd0fca87"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.241890 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlhmw" event={"ID":"059c5dd0-de69-4d0b-843f-d8cc11c71d32","Type":"ContainerDied","Data":"e52a9512bd11838d27046450dd6cc7093caa95cf4cc983b87504fe81dac59a2c"} Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.241921 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlhmw" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.241952 4886 scope.go:117] "RemoveContainer" containerID="7f6316ac3e91d81d578eb8edb8979b82bbf6909c33ab45cb7360f3ed70e0f9da" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.251689 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9b78f5c0-b665-4723-bddd-e6cccd0fca87","Type":"ContainerDied","Data":"6d09db555acff88a4760fe63c41ebd935387675028bf0726bec583949002e087"} Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.251798 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.309584 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlhmw"] Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.316900 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.316939 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b78f5c0-b665-4723-bddd-e6cccd0fca87-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.368472 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlhmw"] Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.383671 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.400562 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:25:39 crc 
kubenswrapper[4886]: I0219 21:25:39.436410 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:25:39 crc kubenswrapper[4886]: E0219 21:25:39.436941 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="registry-server" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.436961 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="registry-server" Feb 19 21:25:39 crc kubenswrapper[4886]: E0219 21:25:39.436974 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="extract-content" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.436980 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="extract-content" Feb 19 21:25:39 crc kubenswrapper[4886]: E0219 21:25:39.436994 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="setup-container" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.437000 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="setup-container" Feb 19 21:25:39 crc kubenswrapper[4886]: E0219 21:25:39.437020 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.437026 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" Feb 19 21:25:39 crc kubenswrapper[4886]: E0219 21:25:39.437041 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="extract-utilities" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.437048 4886 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="extract-utilities" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.437321 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" containerName="registry-server" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.437348 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.438808 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.453189 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521477 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" 
Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521672 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-server-conf\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521783 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521856 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-config-data\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.521981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/506a0730-319d-41d0-bedb-e27054002d43-pod-info\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.522028 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: 
I0219 21:25:39.522065 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.522134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/506a0730-319d-41d0-bedb-e27054002d43-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.522225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszct\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-kube-api-access-jszct\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624443 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/506a0730-319d-41d0-bedb-e27054002d43-pod-info\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624501 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624554 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624579 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/506a0730-319d-41d0-bedb-e27054002d43-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624694 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszct\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-kube-api-access-jszct\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624736 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624787 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624809 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624858 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-server-conf\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.624937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.625012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-config-data\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.626516 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.626650 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-server-conf\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 
21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.627110 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.627190 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/506a0730-319d-41d0-bedb-e27054002d43-config-data\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.627568 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.627751 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.627786 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7e5ae1e34e02febc8536f339c4f3cd95210d92fb100b4ae6e8cc0017ada2a80f/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.629059 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.629999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/506a0730-319d-41d0-bedb-e27054002d43-pod-info\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.630121 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.630347 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/506a0730-319d-41d0-bedb-e27054002d43-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " 
pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.642954 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszct\" (UniqueName: \"kubernetes.io/projected/506a0730-319d-41d0-bedb-e27054002d43-kube-api-access-jszct\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.695429 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-365d36af-2f4e-4fca-b720-deca58835685\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365d36af-2f4e-4fca-b720-deca58835685\") pod \"rabbitmq-server-2\" (UID: \"506a0730-319d-41d0-bedb-e27054002d43\") " pod="openstack/rabbitmq-server-2" Feb 19 21:25:39 crc kubenswrapper[4886]: I0219 21:25:39.766716 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 21:25:40 crc kubenswrapper[4886]: I0219 21:25:40.615880 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059c5dd0-de69-4d0b-843f-d8cc11c71d32" path="/var/lib/kubelet/pods/059c5dd0-de69-4d0b-843f-d8cc11c71d32/volumes" Feb 19 21:25:40 crc kubenswrapper[4886]: I0219 21:25:40.617370 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" path="/var/lib/kubelet/pods/9b78f5c0-b665-4723-bddd-e6cccd0fca87/volumes" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.443770 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-jkkrt"] Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.446480 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.448805 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.478100 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-jkkrt"] Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.509594 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-config\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.509650 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.509777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrgq\" (UniqueName: \"kubernetes.io/projected/2a3387db-6dea-4c18-8d1c-4265976ae53a-kube-api-access-hzrgq\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.509823 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.509953 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.510037 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.510200 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.518630 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-jkkrt"] Feb 19 21:25:42 crc kubenswrapper[4886]: E0219 21:25:42.519755 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-hzrgq openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" podUID="2a3387db-6dea-4c18-8d1c-4265976ae53a" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.547829 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6f6df4f56c-g8cgw"] Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.550063 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.582412 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-g8cgw"] Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.612764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.612818 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.612842 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.612866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-config\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc 
kubenswrapper[4886]: I0219 21:25:42.612887 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.612947 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrgq\" (UniqueName: \"kubernetes.io/projected/2a3387db-6dea-4c18-8d1c-4265976ae53a-kube-api-access-hzrgq\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.612972 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.613001 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-config\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.613051 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc 
kubenswrapper[4886]: I0219 21:25:42.613069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.613106 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.613215 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.613326 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9gqj\" (UniqueName: \"kubernetes.io/projected/5d779acb-5511-4157-880f-7d114fd1f700-kube-api-access-p9gqj\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.613352 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" 
Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.614228 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-config\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.614760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.616498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.616543 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.617004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.626186 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.647120 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrgq\" (UniqueName: \"kubernetes.io/projected/2a3387db-6dea-4c18-8d1c-4265976ae53a-kube-api-access-hzrgq\") pod \"dnsmasq-dns-7d84b4d45c-jkkrt\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.724804 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.724873 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.724892 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.724965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-config\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.725014 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.725165 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9gqj\" (UniqueName: \"kubernetes.io/projected/5d779acb-5511-4157-880f-7d114fd1f700-kube-api-access-p9gqj\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.725187 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.726117 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.726870 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.726972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.729747 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.730977 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-config\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.731057 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d779acb-5511-4157-880f-7d114fd1f700-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.762187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9gqj\" (UniqueName: \"kubernetes.io/projected/5d779acb-5511-4157-880f-7d114fd1f700-kube-api-access-p9gqj\") pod 
\"dnsmasq-dns-6f6df4f56c-g8cgw\" (UID: \"5d779acb-5511-4157-880f-7d114fd1f700\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:42 crc kubenswrapper[4886]: I0219 21:25:42.874693 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.297552 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9b78f5c0-b665-4723-bddd-e6cccd0fca87" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.302759 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.330850 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.441520 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-swift-storage-0\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.441600 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-svc\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.441820 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzrgq\" (UniqueName: \"kubernetes.io/projected/2a3387db-6dea-4c18-8d1c-4265976ae53a-kube-api-access-hzrgq\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" 
(UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.441874 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-openstack-edpm-ipam\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442045 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442056 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-sb\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-nb\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442121 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-config\") pod \"2a3387db-6dea-4c18-8d1c-4265976ae53a\" (UID: \"2a3387db-6dea-4c18-8d1c-4265976ae53a\") " Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442126 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442393 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442818 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442843 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.442863 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.443080 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.443124 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.443412 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-config" (OuterVolumeSpecName: "config") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.447028 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3387db-6dea-4c18-8d1c-4265976ae53a-kube-api-access-hzrgq" (OuterVolumeSpecName: "kube-api-access-hzrgq") pod "2a3387db-6dea-4c18-8d1c-4265976ae53a" (UID: "2a3387db-6dea-4c18-8d1c-4265976ae53a"). InnerVolumeSpecName "kube-api-access-hzrgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.544881 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzrgq\" (UniqueName: \"kubernetes.io/projected/2a3387db-6dea-4c18-8d1c-4265976ae53a-kube-api-access-hzrgq\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.544914 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.544928 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:43 crc kubenswrapper[4886]: I0219 21:25:43.544939 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3387db-6dea-4c18-8d1c-4265976ae53a-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:44 crc kubenswrapper[4886]: I0219 21:25:44.314951 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-jkkrt" Feb 19 21:25:44 crc kubenswrapper[4886]: I0219 21:25:44.438286 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-jkkrt"] Feb 19 21:25:44 crc kubenswrapper[4886]: I0219 21:25:44.464621 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-jkkrt"] Feb 19 21:25:44 crc kubenswrapper[4886]: I0219 21:25:44.603166 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:25:44 crc kubenswrapper[4886]: E0219 21:25:44.603525 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:25:44 crc kubenswrapper[4886]: I0219 21:25:44.620993 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3387db-6dea-4c18-8d1c-4265976ae53a" path="/var/lib/kubelet/pods/2a3387db-6dea-4c18-8d1c-4265976ae53a/volumes" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.107888 4886 scope.go:117] "RemoveContainer" containerID="d5c5a70c3534360aad0a2a686fa17bc6d8762df8144c048cf3ad632725a85086" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.213909 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.252641 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-tls\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256019 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256103 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-server-conf\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256166 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-erlang-cookie\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256190 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-plugins-conf\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256228 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-plugins\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256304 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bm9b\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-kube-api-access-2bm9b\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256399 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-confd\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256428 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4270056-5929-46be-bced-090af7fb6761-erlang-cookie-secret\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256479 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-config-data\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256536 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4270056-5929-46be-bced-090af7fb6761-pod-info\") pod \"c4270056-5929-46be-bced-090af7fb6761\" (UID: \"c4270056-5929-46be-bced-090af7fb6761\") " Feb 19 
21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.256866 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.257243 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.257496 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.257663 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.257676 4886 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.257686 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.291931 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-kube-api-access-2bm9b" (OuterVolumeSpecName: "kube-api-access-2bm9b") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "kube-api-access-2bm9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.292748 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c4270056-5929-46be-bced-090af7fb6761-pod-info" (OuterVolumeSpecName: "pod-info") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.302889 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.304487 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4270056-5929-46be-bced-090af7fb6761-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.323637 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b" (OuterVolumeSpecName: "persistence") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.343037 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-server-conf" (OuterVolumeSpecName: "server-conf") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.352052 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.353133 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4270056-5929-46be-bced-090af7fb6761","Type":"ContainerDied","Data":"6e95c8f6b26d0884009e67178d07c44f76d83fd2e191b561caafd75a0e18839a"} Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.365087 4886 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4270056-5929-46be-bced-090af7fb6761-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.365115 4886 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4270056-5929-46be-bced-090af7fb6761-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.365125 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.365154 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") on node \"crc\" " Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.365168 4886 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.365180 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bm9b\" (UniqueName: 
\"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-kube-api-access-2bm9b\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.410519 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-config-data" (OuterVolumeSpecName: "config-data") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.431565 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.431780 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b") on node "crc" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.463602 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c4270056-5929-46be-bced-090af7fb6761" (UID: "c4270056-5929-46be-bced-090af7fb6761"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.467783 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4270056-5929-46be-bced-090af7fb6761-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.467816 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4270056-5929-46be-bced-090af7fb6761-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.467829 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.687306 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.715484 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.736814 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:25:46 crc kubenswrapper[4886]: E0219 21:25:46.737414 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="setup-container" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.737433 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="setup-container" Feb 19 21:25:46 crc kubenswrapper[4886]: E0219 21:25:46.737467 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="rabbitmq" Feb 19 21:25:46 crc 
kubenswrapper[4886]: I0219 21:25:46.737478 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="rabbitmq" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.737707 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4270056-5929-46be-bced-090af7fb6761" containerName="rabbitmq" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.739008 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.742097 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.742328 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.742599 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bfnxg" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.743463 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.743481 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.743549 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.743775 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.749861 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778141 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwmj\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-kube-api-access-rbwmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778197 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778251 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778286 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778337 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778383 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778433 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778498 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778526 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.778542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 
crc kubenswrapper[4886]: I0219 21:25:46.778565 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.880809 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwmj\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-kube-api-access-rbwmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.880864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.880916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.880935 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.880979 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.881024 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.881072 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.881163 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.881189 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.881204 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.881226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.884290 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.884348 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e23f42a41100c932cf91673bcb2dafbf5a7af374281addd12c20957e79bb03b2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.884296 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.885254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.885723 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.885786 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.886445 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.888834 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.889767 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.889842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.892838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.899459 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwmj\" (UniqueName: \"kubernetes.io/projected/dfc18bff-58ba-4eec-9de7-bca4e9dd55e4-kube-api-access-rbwmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:46 crc kubenswrapper[4886]: I0219 21:25:46.940079 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8965c421-86d6-4ed1-97a8-9d5923d37c1b\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:47 crc kubenswrapper[4886]: I0219 21:25:47.079961 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:25:48 crc kubenswrapper[4886]: I0219 21:25:48.618484 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4270056-5929-46be-bced-090af7fb6761" path="/var/lib/kubelet/pods/c4270056-5929-46be-bced-090af7fb6761/volumes" Feb 19 21:25:48 crc kubenswrapper[4886]: I0219 21:25:48.644412 4886 scope.go:117] "RemoveContainer" containerID="5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.107471 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.107554 4886 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.107733 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vb8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-xx9l4_openstack(489598d7-0933-4271-92d3-26a24263c4bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.109012 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-xx9l4" podUID="489598d7-0933-4271-92d3-26a24263c4bc" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.413504 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-xx9l4" podUID="489598d7-0933-4271-92d3-26a24263c4bc" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.546519 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.546584 4886 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.546742 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fh59ch574hdbh5b8h99hd6h676h74h57bh5ddh687h94h646h57bh5cdh68fh5b9h84h59dh5b6h695hf4h68h58fh88h5d6h677h5fch667h675h89q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8857,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1e98987f-2584-4d4c-ae5e-7fd6bdb947d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.548485 4886 scope.go:117] "RemoveContainer" containerID="0ed02cdc35717f60c7fa03a8fb309c0b4840132776e739cb32902563078e900d" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.616662 4886 scope.go:117] "RemoveContainer" containerID="ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.755482 4886 scope.go:117] "RemoveContainer" containerID="6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.833852 4886 scope.go:117] "RemoveContainer" containerID="4e8922dfb830e30a42ef115e0427951e4c6a6b6e81ca7dde9190231ea706411c" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.859896 4886 scope.go:117] "RemoveContainer" containerID="aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.880166 4886 scope.go:117] "RemoveContainer" 
containerID="96d73c58b46d04c1c265dd1d99f23c8ac2de892994d55a0657ff7b540302976e" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.902253 4886 scope.go:117] "RemoveContainer" containerID="5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.902972 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751\": container with ID starting with 5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751 not found: ID does not exist" containerID="5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.903012 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751"} err="failed to get container status \"5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751\": rpc error: code = NotFound desc = could not find container \"5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751\": container with ID starting with 5bb601b75281ac4e071bbe5905d68a6efdf7e789ef3990c61b11a02d4f933751 not found: ID does not exist" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.903325 4886 scope.go:117] "RemoveContainer" containerID="ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.903783 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1\": container with ID starting with ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1 not found: ID does not exist" containerID="ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1" Feb 19 21:25:49 crc 
kubenswrapper[4886]: I0219 21:25:49.903803 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1"} err="failed to get container status \"ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1\": rpc error: code = NotFound desc = could not find container \"ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1\": container with ID starting with ce7d01c8edadb597a63c0ac26ed52d10e15d6cdbd55a9bc4ef1349b5d4fcbdd1 not found: ID does not exist" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.924195 4886 scope.go:117] "RemoveContainer" containerID="6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.925053 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a\": container with ID starting with 6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a not found: ID does not exist" containerID="6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.925092 4886 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a\": rpc error: code = NotFound desc = could not find container \"6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a\": container with ID starting with 6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a not found: ID does not exist" containerID="6c0345984c9b5a46610664ed23a96388ea630e43827323f49b8151985812b03a" Feb 19 21:25:49 crc kubenswrapper[4886]: I0219 21:25:49.925125 4886 scope.go:117] "RemoveContainer" containerID="aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b" Feb 19 21:25:49 crc kubenswrapper[4886]: 
E0219 21:25:49.925448 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b\": container with ID starting with aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b not found: ID does not exist" containerID="aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b" Feb 19 21:25:49 crc kubenswrapper[4886]: E0219 21:25:49.925483 4886 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b\": rpc error: code = NotFound desc = could not find container \"aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b\": container with ID starting with aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b not found: ID does not exist" containerID="aca645c0e142a7b5a73ac3674dc61187f5c1702e57f8c676e126f2e579b59e7b" Feb 19 21:25:50 crc kubenswrapper[4886]: I0219 21:25:50.199922 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 21:25:50 crc kubenswrapper[4886]: I0219 21:25:50.406753 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-g8cgw"] Feb 19 21:25:50 crc kubenswrapper[4886]: I0219 21:25:50.422542 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"506a0730-319d-41d0-bedb-e27054002d43","Type":"ContainerStarted","Data":"a2cedc5bbe0de884ccb8cfdb9e117ae7a22bfebbf8004cc9c99d5bd213fc1d18"} Feb 19 21:25:50 crc kubenswrapper[4886]: I0219 21:25:50.424137 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerStarted","Data":"c40cce0f778c77d54d5a304bdc32d0fa90c79585a76b7d36585a773a98714cc9"} Feb 19 21:25:50 crc kubenswrapper[4886]: W0219 21:25:50.424856 4886 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfc18bff_58ba_4eec_9de7_bca4e9dd55e4.slice/crio-537dd12d35c33a77bd1391b4e7ca9062b1e7f0a6af8739116c8f4e835af98969 WatchSource:0}: Error finding container 537dd12d35c33a77bd1391b4e7ca9062b1e7f0a6af8739116c8f4e835af98969: Status 404 returned error can't find the container with id 537dd12d35c33a77bd1391b4e7ca9062b1e7f0a6af8739116c8f4e835af98969 Feb 19 21:25:50 crc kubenswrapper[4886]: I0219 21:25:50.427363 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 21:25:51 crc kubenswrapper[4886]: I0219 21:25:51.437828 4886 generic.go:334] "Generic (PLEG): container finished" podID="5d779acb-5511-4157-880f-7d114fd1f700" containerID="e5c12d109555f1cd9e154b707bedd24d8af8a9f6903aba4bc9aee3d602bcdadb" exitCode=0 Feb 19 21:25:51 crc kubenswrapper[4886]: I0219 21:25:51.437946 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" event={"ID":"5d779acb-5511-4157-880f-7d114fd1f700","Type":"ContainerDied","Data":"e5c12d109555f1cd9e154b707bedd24d8af8a9f6903aba4bc9aee3d602bcdadb"} Feb 19 21:25:51 crc kubenswrapper[4886]: I0219 21:25:51.438282 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" event={"ID":"5d779acb-5511-4157-880f-7d114fd1f700","Type":"ContainerStarted","Data":"6a7b890e7706d8cff9cbe2f503f260e0ba62d33b34787c6bcc30af88890e1c20"} Feb 19 21:25:51 crc kubenswrapper[4886]: I0219 21:25:51.442387 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4","Type":"ContainerStarted","Data":"537dd12d35c33a77bd1391b4e7ca9062b1e7f0a6af8739116c8f4e835af98969"} Feb 19 21:25:52 crc kubenswrapper[4886]: I0219 21:25:52.459776 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4","Type":"ContainerStarted","Data":"67347b40847fca0f904fa26ecaf00d4373328a21a1a904ac26f3dafaf0d09d29"} Feb 19 21:25:52 crc kubenswrapper[4886]: I0219 21:25:52.461670 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"506a0730-319d-41d0-bedb-e27054002d43","Type":"ContainerStarted","Data":"2970fa6953df272aa1854aa93cba1ac52bc052a84e466cfe13c4115ee40d1838"} Feb 19 21:25:52 crc kubenswrapper[4886]: I0219 21:25:52.463756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerStarted","Data":"2eb8ec590b63a542f99c7ddc20da18def702c34f68c4b1fd0cf8b1c3b85d803a"} Feb 19 21:25:52 crc kubenswrapper[4886]: I0219 21:25:52.466011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" event={"ID":"5d779acb-5511-4157-880f-7d114fd1f700","Type":"ContainerStarted","Data":"82fc45513417ff97334216cb5cfb286f42bc97d660a34c1bfd0c2ec5496f7007"} Feb 19 21:25:52 crc kubenswrapper[4886]: I0219 21:25:52.466226 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:52 crc kubenswrapper[4886]: I0219 21:25:52.515454 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" podStartSLOduration=10.515436157 podStartE2EDuration="10.515436157s" podCreationTimestamp="2026-02-19 21:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:25:52.508222631 +0000 UTC m=+1583.136065691" watchObservedRunningTime="2026-02-19 21:25:52.515436157 +0000 UTC m=+1583.143279217" Feb 19 21:25:55 crc kubenswrapper[4886]: E0219 21:25:55.101204 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" Feb 19 21:25:55 crc kubenswrapper[4886]: I0219 21:25:55.590577 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerStarted","Data":"4bf784add56451a66f12947ef661aa0724742533515d956713a552d842ed8fea"} Feb 19 21:25:55 crc kubenswrapper[4886]: I0219 21:25:55.590871 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 21:25:55 crc kubenswrapper[4886]: E0219 21:25:55.593109 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" Feb 19 21:25:56 crc kubenswrapper[4886]: E0219 21:25:56.609700 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" Feb 19 21:25:57 crc kubenswrapper[4886]: I0219 21:25:57.622783 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:25:57 crc kubenswrapper[4886]: E0219 21:25:57.624819 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:25:57 crc kubenswrapper[4886]: I0219 21:25:57.879494 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-g8cgw" Feb 19 21:25:57 crc kubenswrapper[4886]: I0219 21:25:57.988969 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-77vlz"] Feb 19 21:25:57 crc kubenswrapper[4886]: I0219 21:25:57.989205 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" podUID="2442d861-718b-433c-9d24-06d9565234c8" containerName="dnsmasq-dns" containerID="cri-o://a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac" gracePeriod=10 Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.526588 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.613669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.613751 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd62c\" (UniqueName: \"kubernetes.io/projected/2442d861-718b-433c-9d24-06d9565234c8-kube-api-access-kd62c\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.613917 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-swift-storage-0\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.614004 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-svc\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.614073 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-config\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.614184 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-nb\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.625659 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2442d861-718b-433c-9d24-06d9565234c8-kube-api-access-kd62c" (OuterVolumeSpecName: "kube-api-access-kd62c") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "kube-api-access-kd62c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.674041 4886 generic.go:334] "Generic (PLEG): container finished" podID="2442d861-718b-433c-9d24-06d9565234c8" containerID="a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac" exitCode=0 Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.674095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" event={"ID":"2442d861-718b-433c-9d24-06d9565234c8","Type":"ContainerDied","Data":"a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac"} Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.674166 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" event={"ID":"2442d861-718b-433c-9d24-06d9565234c8","Type":"ContainerDied","Data":"ec04765fdd7257d6e0d2ccd18c18864257f9732d21f25317ddd7483d99583c93"} Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.674189 4886 scope.go:117] "RemoveContainer" containerID="a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.674132 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-77vlz" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.709106 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.714496 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-config" (OuterVolumeSpecName: "config") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.715681 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.716186 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb\") pod \"2442d861-718b-433c-9d24-06d9565234c8\" (UID: \"2442d861-718b-433c-9d24-06d9565234c8\") " Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.717216 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.717242 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-config\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.717254 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd62c\" (UniqueName: \"kubernetes.io/projected/2442d861-718b-433c-9d24-06d9565234c8-kube-api-access-kd62c\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:58 crc kubenswrapper[4886]: W0219 21:25:58.718334 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2442d861-718b-433c-9d24-06d9565234c8/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.718413 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.725681 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.744070 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2442d861-718b-433c-9d24-06d9565234c8" (UID: "2442d861-718b-433c-9d24-06d9565234c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.755431 4886 scope.go:117] "RemoveContainer" containerID="007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.779620 4886 scope.go:117] "RemoveContainer" containerID="a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac" Feb 19 21:25:58 crc kubenswrapper[4886]: E0219 21:25:58.780178 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac\": container with ID starting with a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac not found: ID does not exist" containerID="a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.780220 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac"} err="failed to get 
container status \"a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac\": rpc error: code = NotFound desc = could not find container \"a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac\": container with ID starting with a60577fd198a8e2f4abc54a65d8ee321433f9c7eda1eeee82d4b48c90e3bddac not found: ID does not exist" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.780241 4886 scope.go:117] "RemoveContainer" containerID="007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226" Feb 19 21:25:58 crc kubenswrapper[4886]: E0219 21:25:58.780686 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226\": container with ID starting with 007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226 not found: ID does not exist" containerID="007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.780726 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226"} err="failed to get container status \"007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226\": rpc error: code = NotFound desc = could not find container \"007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226\": container with ID starting with 007d0e2f15a9fbe11361689af48809c5d0732dba9d72a2ca50346dc5b50c6226 not found: ID does not exist" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.819991 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.820029 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:58 crc kubenswrapper[4886]: I0219 21:25:58.820042 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442d861-718b-433c-9d24-06d9565234c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 21:25:59 crc kubenswrapper[4886]: I0219 21:25:59.008106 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-77vlz"] Feb 19 21:25:59 crc kubenswrapper[4886]: I0219 21:25:59.021987 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-77vlz"] Feb 19 21:26:00 crc kubenswrapper[4886]: I0219 21:26:00.624521 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2442d861-718b-433c-9d24-06d9565234c8" path="/var/lib/kubelet/pods/2442d861-718b-433c-9d24-06d9565234c8/volumes" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.095203 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm"] Feb 19 21:26:03 crc kubenswrapper[4886]: E0219 21:26:03.096610 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2442d861-718b-433c-9d24-06d9565234c8" containerName="init" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.096630 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2442d861-718b-433c-9d24-06d9565234c8" containerName="init" Feb 19 21:26:03 crc kubenswrapper[4886]: E0219 21:26:03.096669 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2442d861-718b-433c-9d24-06d9565234c8" containerName="dnsmasq-dns" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.096679 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2442d861-718b-433c-9d24-06d9565234c8" containerName="dnsmasq-dns" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.096986 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2442d861-718b-433c-9d24-06d9565234c8" containerName="dnsmasq-dns" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.097881 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.102893 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.103124 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.103273 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.103381 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.108567 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm"] Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.117560 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6thb\" (UniqueName: \"kubernetes.io/projected/ccc7ab98-2fbb-4461-908c-bd2f346b6250-kube-api-access-z6thb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.117651 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.117701 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.117793 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.219650 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6thb\" (UniqueName: \"kubernetes.io/projected/ccc7ab98-2fbb-4461-908c-bd2f346b6250-kube-api-access-z6thb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.219767 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.219828 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.219912 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.225730 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.226013 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.226944 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.242826 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6thb\" (UniqueName: \"kubernetes.io/projected/ccc7ab98-2fbb-4461-908c-bd2f346b6250-kube-api-access-z6thb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.455946 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.750363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xx9l4" event={"ID":"489598d7-0933-4271-92d3-26a24263c4bc","Type":"ContainerStarted","Data":"853dad30726458ee5a39b1d7e6075ca6062ae495261685ab362f5526a7d55ec0"} Feb 19 21:26:03 crc kubenswrapper[4886]: I0219 21:26:03.788985 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-xx9l4" podStartSLOduration=2.656424434 podStartE2EDuration="42.788961659s" podCreationTimestamp="2026-02-19 21:25:21 +0000 UTC" firstStartedPulling="2026-02-19 21:25:22.623841295 +0000 UTC m=+1553.251684345" lastFinishedPulling="2026-02-19 21:26:02.75637852 +0000 UTC m=+1593.384221570" observedRunningTime="2026-02-19 21:26:03.773802048 +0000 UTC m=+1594.401645098" watchObservedRunningTime="2026-02-19 21:26:03.788961659 +0000 UTC m=+1594.416804709" Feb 19 21:26:04 crc kubenswrapper[4886]: I0219 21:26:04.768671 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm"] Feb 19 21:26:05 crc kubenswrapper[4886]: I0219 21:26:05.775028 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" event={"ID":"ccc7ab98-2fbb-4461-908c-bd2f346b6250","Type":"ContainerStarted","Data":"eb736358af1a85755239c1993e03df619c8d1a32f5df5f6c13b5a7656964b7ac"} Feb 19 21:26:06 crc kubenswrapper[4886]: I0219 21:26:06.788889 4886 generic.go:334] "Generic (PLEG): container finished" podID="489598d7-0933-4271-92d3-26a24263c4bc" containerID="853dad30726458ee5a39b1d7e6075ca6062ae495261685ab362f5526a7d55ec0" exitCode=0 Feb 19 21:26:06 crc kubenswrapper[4886]: I0219 21:26:06.789239 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xx9l4" event={"ID":"489598d7-0933-4271-92d3-26a24263c4bc","Type":"ContainerDied","Data":"853dad30726458ee5a39b1d7e6075ca6062ae495261685ab362f5526a7d55ec0"} Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.267686 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xx9l4" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.463657 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-config-data\") pod \"489598d7-0933-4271-92d3-26a24263c4bc\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.463714 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vb8j\" (UniqueName: \"kubernetes.io/projected/489598d7-0933-4271-92d3-26a24263c4bc-kube-api-access-7vb8j\") pod \"489598d7-0933-4271-92d3-26a24263c4bc\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.463767 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-combined-ca-bundle\") pod \"489598d7-0933-4271-92d3-26a24263c4bc\" (UID: \"489598d7-0933-4271-92d3-26a24263c4bc\") " Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.471005 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489598d7-0933-4271-92d3-26a24263c4bc-kube-api-access-7vb8j" (OuterVolumeSpecName: "kube-api-access-7vb8j") pod "489598d7-0933-4271-92d3-26a24263c4bc" (UID: "489598d7-0933-4271-92d3-26a24263c4bc"). InnerVolumeSpecName "kube-api-access-7vb8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.497010 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "489598d7-0933-4271-92d3-26a24263c4bc" (UID: "489598d7-0933-4271-92d3-26a24263c4bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.569782 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vb8j\" (UniqueName: \"kubernetes.io/projected/489598d7-0933-4271-92d3-26a24263c4bc-kube-api-access-7vb8j\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.570127 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.595636 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-config-data" (OuterVolumeSpecName: "config-data") pod "489598d7-0933-4271-92d3-26a24263c4bc" (UID: "489598d7-0933-4271-92d3-26a24263c4bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.601247 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:26:08 crc kubenswrapper[4886]: E0219 21:26:08.601710 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.672450 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489598d7-0933-4271-92d3-26a24263c4bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:08 crc 
kubenswrapper[4886]: I0219 21:26:08.820223 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xx9l4" Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.820390 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xx9l4" event={"ID":"489598d7-0933-4271-92d3-26a24263c4bc","Type":"ContainerDied","Data":"9a62968bcb94e367162710d115fdc8e6f9a2e9943771fe2158570ab5e2a3aefd"} Feb 19 21:26:08 crc kubenswrapper[4886]: I0219 21:26:08.820456 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a62968bcb94e367162710d115fdc8e6f9a2e9943771fe2158570ab5e2a3aefd" Feb 19 21:26:09 crc kubenswrapper[4886]: I0219 21:26:09.988724 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6564fcf45d-4kfd4"] Feb 19 21:26:10 crc kubenswrapper[4886]: E0219 21:26:10.022209 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489598d7-0933-4271-92d3-26a24263c4bc" containerName="heat-db-sync" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.022252 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="489598d7-0933-4271-92d3-26a24263c4bc" containerName="heat-db-sync" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.023038 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="489598d7-0933-4271-92d3-26a24263c4bc" containerName="heat-db-sync" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.025368 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6564fcf45d-4kfd4"] Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.025476 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.063637 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-64d484fb98-j7pc7"] Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.067387 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.083301 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c4bd9778c-6dbw7"] Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.089359 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.104613 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64d484fb98-j7pc7"] Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.119297 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c4bd9778c-6dbw7"] Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-internal-tls-certs\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd42m\" (UniqueName: \"kubernetes.io/projected/bf3d0d39-fbc5-4008-bac1-611ca62c6519-kube-api-access-rd42m\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217571 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-public-tls-certs\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217593 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-config-data-custom\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217614 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c4bw\" (UniqueName: \"kubernetes.io/projected/7c6028ca-f0bb-4d11-abb0-026d15a268aa-kube-api-access-6c4bw\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217649 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-combined-ca-bundle\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-config-data-custom\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 
21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217704 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-public-tls-certs\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217862 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-config-data\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-internal-tls-certs\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.217994 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-combined-ca-bundle\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.218018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-combined-ca-bundle\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " 
pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.218236 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-config-data\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.218281 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqz5\" (UniqueName: \"kubernetes.io/projected/9ab06011-b91f-458d-87e8-54d4180aa5b3-kube-api-access-2hqz5\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.218311 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-config-data-custom\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.218345 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-config-data\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-public-tls-certs\") pod \"heat-api-64d484fb98-j7pc7\" (UID: 
\"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321134 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-config-data\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321169 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-internal-tls-certs\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-combined-ca-bundle\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-combined-ca-bundle\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321320 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-config-data\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " 
pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321349 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqz5\" (UniqueName: \"kubernetes.io/projected/9ab06011-b91f-458d-87e8-54d4180aa5b3-kube-api-access-2hqz5\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-config-data-custom\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-config-data\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-internal-tls-certs\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321503 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd42m\" (UniqueName: \"kubernetes.io/projected/bf3d0d39-fbc5-4008-bac1-611ca62c6519-kube-api-access-rd42m\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " 
pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-public-tls-certs\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321569 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-config-data-custom\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321597 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c4bw\" (UniqueName: \"kubernetes.io/projected/7c6028ca-f0bb-4d11-abb0-026d15a268aa-kube-api-access-6c4bw\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-combined-ca-bundle\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.321672 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-config-data-custom\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " 
pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.332253 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-config-data\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.333094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-combined-ca-bundle\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.333370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-config-data\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.334024 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-config-data-custom\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.340463 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd42m\" (UniqueName: \"kubernetes.io/projected/bf3d0d39-fbc5-4008-bac1-611ca62c6519-kube-api-access-rd42m\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 
21:26:10.340907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-combined-ca-bundle\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.342900 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-public-tls-certs\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.343060 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-config-data-custom\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.343272 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-public-tls-certs\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.343912 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-config-data-custom\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.344274 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6c4bw\" (UniqueName: \"kubernetes.io/projected/7c6028ca-f0bb-4d11-abb0-026d15a268aa-kube-api-access-6c4bw\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.344408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6028ca-f0bb-4d11-abb0-026d15a268aa-internal-tls-certs\") pod \"heat-api-64d484fb98-j7pc7\" (UID: \"7c6028ca-f0bb-4d11-abb0-026d15a268aa\") " pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.344427 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-internal-tls-certs\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.352509 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ab06011-b91f-458d-87e8-54d4180aa5b3-config-data\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.355624 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqz5\" (UniqueName: \"kubernetes.io/projected/9ab06011-b91f-458d-87e8-54d4180aa5b3-kube-api-access-2hqz5\") pod \"heat-engine-6564fcf45d-4kfd4\" (UID: \"9ab06011-b91f-458d-87e8-54d4180aa5b3\") " pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.356602 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf3d0d39-fbc5-4008-bac1-611ca62c6519-combined-ca-bundle\") pod \"heat-cfnapi-7c4bd9778c-6dbw7\" (UID: \"bf3d0d39-fbc5-4008-bac1-611ca62c6519\") " pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.365679 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.384819 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:10 crc kubenswrapper[4886]: I0219 21:26:10.406752 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:11 crc kubenswrapper[4886]: I0219 21:26:11.619139 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 21:26:15 crc kubenswrapper[4886]: W0219 21:26:15.474030 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf3d0d39_fbc5_4008_bac1_611ca62c6519.slice/crio-b7a8dfd11058fcd532d410785c4c240e395e83a0fbe5d62fda40623a45afd7cb WatchSource:0}: Error finding container b7a8dfd11058fcd532d410785c4c240e395e83a0fbe5d62fda40623a45afd7cb: Status 404 returned error can't find the container with id b7a8dfd11058fcd532d410785c4c240e395e83a0fbe5d62fda40623a45afd7cb Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.478752 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c4bd9778c-6dbw7"] Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.651779 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64d484fb98-j7pc7"] Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.682846 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6564fcf45d-4kfd4"] Feb 19 21:26:15 crc kubenswrapper[4886]: 
I0219 21:26:15.950015 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" event={"ID":"bf3d0d39-fbc5-4008-bac1-611ca62c6519","Type":"ContainerStarted","Data":"b7a8dfd11058fcd532d410785c4c240e395e83a0fbe5d62fda40623a45afd7cb"} Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.955422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" event={"ID":"ccc7ab98-2fbb-4461-908c-bd2f346b6250","Type":"ContainerStarted","Data":"b5c66fc482cec96b1a0fb01d25c86a460dbba600561a85bdd43b7e1155e10ad8"} Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.961587 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6564fcf45d-4kfd4" event={"ID":"9ab06011-b91f-458d-87e8-54d4180aa5b3","Type":"ContainerStarted","Data":"a41c0130c5633eca91a970bed611a1541b053476b580801272560531b5de99a4"} Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.967287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerStarted","Data":"bd67cce4e6dbfdc1339dd425114735eac488c1569337e3c9697ec44a066cd4ee"} Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.968932 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64d484fb98-j7pc7" event={"ID":"7c6028ca-f0bb-4d11-abb0-026d15a268aa","Type":"ContainerStarted","Data":"d4a2339e4b63f81a8d56515b4c784e2070eaf51ee1247f64e1d8276116aca1b5"} Feb 19 21:26:15 crc kubenswrapper[4886]: I0219 21:26:15.982618 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" podStartSLOduration=2.515946049 podStartE2EDuration="12.982599269s" podCreationTimestamp="2026-02-19 21:26:03 +0000 UTC" firstStartedPulling="2026-02-19 21:26:04.772027226 +0000 UTC m=+1595.399870276" lastFinishedPulling="2026-02-19 21:26:15.238680446 
+0000 UTC m=+1605.866523496" observedRunningTime="2026-02-19 21:26:15.974004179 +0000 UTC m=+1606.601847229" watchObservedRunningTime="2026-02-19 21:26:15.982599269 +0000 UTC m=+1606.610442319" Feb 19 21:26:16 crc kubenswrapper[4886]: I0219 21:26:15.999743 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.173130866 podStartE2EDuration="48.999726388s" podCreationTimestamp="2026-02-19 21:25:27 +0000 UTC" firstStartedPulling="2026-02-19 21:25:28.412657968 +0000 UTC m=+1559.040501018" lastFinishedPulling="2026-02-19 21:26:15.23925347 +0000 UTC m=+1605.867096540" observedRunningTime="2026-02-19 21:26:15.99735681 +0000 UTC m=+1606.625199860" watchObservedRunningTime="2026-02-19 21:26:15.999726388 +0000 UTC m=+1606.627569438" Feb 19 21:26:16 crc kubenswrapper[4886]: I0219 21:26:16.986768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6564fcf45d-4kfd4" event={"ID":"9ab06011-b91f-458d-87e8-54d4180aa5b3","Type":"ContainerStarted","Data":"86fe38c72e1c41c315548cd280c4e19b938bf79158594c561dfdf6f56c03b924"} Feb 19 21:26:17 crc kubenswrapper[4886]: I0219 21:26:17.020056 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6564fcf45d-4kfd4" podStartSLOduration=8.020030467 podStartE2EDuration="8.020030467s" podCreationTimestamp="2026-02-19 21:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:26:17.002707993 +0000 UTC m=+1607.630551043" watchObservedRunningTime="2026-02-19 21:26:17.020030467 +0000 UTC m=+1607.647873507" Feb 19 21:26:17 crc kubenswrapper[4886]: I0219 21:26:17.996567 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.034493 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-64d484fb98-j7pc7" event={"ID":"7c6028ca-f0bb-4d11-abb0-026d15a268aa","Type":"ContainerStarted","Data":"4f2e1f6168b221c6cfb3e94e624717b0798d6e9c5d5a94b958fb66b1b56579c5"} Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.035185 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.038031 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" event={"ID":"bf3d0d39-fbc5-4008-bac1-611ca62c6519","Type":"ContainerStarted","Data":"d19db50dfb0fd1f36bcc26e4839a22af877b9266d253e459f44ea2779406a78a"} Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.038482 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.064313 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-64d484fb98-j7pc7" podStartSLOduration=7.6494941 podStartE2EDuration="12.064294048s" podCreationTimestamp="2026-02-19 21:26:09 +0000 UTC" firstStartedPulling="2026-02-19 21:26:15.666413561 +0000 UTC m=+1606.294256611" lastFinishedPulling="2026-02-19 21:26:20.081213509 +0000 UTC m=+1610.709056559" observedRunningTime="2026-02-19 21:26:21.052150531 +0000 UTC m=+1611.679993581" watchObservedRunningTime="2026-02-19 21:26:21.064294048 +0000 UTC m=+1611.692137098" Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.087049 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" podStartSLOduration=7.486843584 podStartE2EDuration="12.087030754s" podCreationTimestamp="2026-02-19 21:26:09 +0000 UTC" firstStartedPulling="2026-02-19 21:26:15.47736965 +0000 UTC m=+1606.105212700" lastFinishedPulling="2026-02-19 21:26:20.07755678 +0000 UTC m=+1610.705399870" observedRunningTime="2026-02-19 21:26:21.075673406 +0000 
UTC m=+1611.703516456" watchObservedRunningTime="2026-02-19 21:26:21.087030754 +0000 UTC m=+1611.714873804" Feb 19 21:26:21 crc kubenswrapper[4886]: I0219 21:26:21.601689 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:26:21 crc kubenswrapper[4886]: E0219 21:26:21.602433 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:26:25 crc kubenswrapper[4886]: I0219 21:26:25.096810 4886 generic.go:334] "Generic (PLEG): container finished" podID="dfc18bff-58ba-4eec-9de7-bca4e9dd55e4" containerID="67347b40847fca0f904fa26ecaf00d4373328a21a1a904ac26f3dafaf0d09d29" exitCode=0 Feb 19 21:26:25 crc kubenswrapper[4886]: I0219 21:26:25.096861 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4","Type":"ContainerDied","Data":"67347b40847fca0f904fa26ecaf00d4373328a21a1a904ac26f3dafaf0d09d29"} Feb 19 21:26:25 crc kubenswrapper[4886]: I0219 21:26:25.104288 4886 generic.go:334] "Generic (PLEG): container finished" podID="506a0730-319d-41d0-bedb-e27054002d43" containerID="2970fa6953df272aa1854aa93cba1ac52bc052a84e466cfe13c4115ee40d1838" exitCode=0 Feb 19 21:26:25 crc kubenswrapper[4886]: I0219 21:26:25.104330 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"506a0730-319d-41d0-bedb-e27054002d43","Type":"ContainerDied","Data":"2970fa6953df272aa1854aa93cba1ac52bc052a84e466cfe13c4115ee40d1838"} Feb 19 21:26:26 crc kubenswrapper[4886]: I0219 21:26:26.120198 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfc18bff-58ba-4eec-9de7-bca4e9dd55e4","Type":"ContainerStarted","Data":"9551c5d53ba4d219b9b59e02219672cfbfd49ee5f40765d1272ee3bd479ea69c"} Feb 19 21:26:26 crc kubenswrapper[4886]: I0219 21:26:26.120994 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:26:26 crc kubenswrapper[4886]: I0219 21:26:26.124718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"506a0730-319d-41d0-bedb-e27054002d43","Type":"ContainerStarted","Data":"5be7020245285bf83133e5c20a8064c6b3d39185cd685b982afca740262b3f00"} Feb 19 21:26:26 crc kubenswrapper[4886]: I0219 21:26:26.124975 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 19 21:26:26 crc kubenswrapper[4886]: I0219 21:26:26.159466 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.159442469 podStartE2EDuration="40.159442469s" podCreationTimestamp="2026-02-19 21:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:26:26.152470988 +0000 UTC m=+1616.780314038" watchObservedRunningTime="2026-02-19 21:26:26.159442469 +0000 UTC m=+1616.787285559" Feb 19 21:26:26 crc kubenswrapper[4886]: I0219 21:26:26.185431 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=47.185417196 podStartE2EDuration="47.185417196s" podCreationTimestamp="2026-02-19 21:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:26:26.183474279 +0000 UTC m=+1616.811317349" watchObservedRunningTime="2026-02-19 21:26:26.185417196 +0000 UTC m=+1616.813260246" Feb 19 
21:26:27 crc kubenswrapper[4886]: I0219 21:26:27.211022 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-64d484fb98-j7pc7" Feb 19 21:26:27 crc kubenswrapper[4886]: I0219 21:26:27.284729 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7c4bd9778c-6dbw7" Feb 19 21:26:27 crc kubenswrapper[4886]: I0219 21:26:27.320646 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d7bdc66c5-25hmr"] Feb 19 21:26:27 crc kubenswrapper[4886]: I0219 21:26:27.320856 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6d7bdc66c5-25hmr" podUID="c2dbc6be-5448-4402-b63d-240f82bb94de" containerName="heat-api" containerID="cri-o://89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55" gracePeriod=60 Feb 19 21:26:27 crc kubenswrapper[4886]: I0219 21:26:27.365917 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b8b45588b-z55k4"] Feb 19 21:26:27 crc kubenswrapper[4886]: I0219 21:26:27.366219 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" podUID="70bf22da-3158-430f-9485-fdd03fd4b6ff" containerName="heat-cfnapi" containerID="cri-o://7e787a7817655415e497bd3652bf130deef61c88ec151456a34f41264170d90c" gracePeriod=60 Feb 19 21:26:27 crc kubenswrapper[4886]: E0219 21:26:27.751328 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccc7ab98_2fbb_4461_908c_bd2f346b6250.slice/crio-conmon-b5c66fc482cec96b1a0fb01d25c86a460dbba600561a85bdd43b7e1155e10ad8.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:26:28 crc kubenswrapper[4886]: I0219 21:26:28.162815 4886 generic.go:334] "Generic (PLEG): container finished" podID="ccc7ab98-2fbb-4461-908c-bd2f346b6250" 
containerID="b5c66fc482cec96b1a0fb01d25c86a460dbba600561a85bdd43b7e1155e10ad8" exitCode=0 Feb 19 21:26:28 crc kubenswrapper[4886]: I0219 21:26:28.162873 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" event={"ID":"ccc7ab98-2fbb-4461-908c-bd2f346b6250","Type":"ContainerDied","Data":"b5c66fc482cec96b1a0fb01d25c86a460dbba600561a85bdd43b7e1155e10ad8"} Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.736033 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.779687 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-ssh-key-openstack-edpm-ipam\") pod \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.779916 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6thb\" (UniqueName: \"kubernetes.io/projected/ccc7ab98-2fbb-4461-908c-bd2f346b6250-kube-api-access-z6thb\") pod \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.780033 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-repo-setup-combined-ca-bundle\") pod \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.780206 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-inventory\") pod \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\" (UID: \"ccc7ab98-2fbb-4461-908c-bd2f346b6250\") " Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.805179 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc7ab98-2fbb-4461-908c-bd2f346b6250-kube-api-access-z6thb" (OuterVolumeSpecName: "kube-api-access-z6thb") pod "ccc7ab98-2fbb-4461-908c-bd2f346b6250" (UID: "ccc7ab98-2fbb-4461-908c-bd2f346b6250"). InnerVolumeSpecName "kube-api-access-z6thb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.806846 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ccc7ab98-2fbb-4461-908c-bd2f346b6250" (UID: "ccc7ab98-2fbb-4461-908c-bd2f346b6250"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.811467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccc7ab98-2fbb-4461-908c-bd2f346b6250" (UID: "ccc7ab98-2fbb-4461-908c-bd2f346b6250"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.840420 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-inventory" (OuterVolumeSpecName: "inventory") pod "ccc7ab98-2fbb-4461-908c-bd2f346b6250" (UID: "ccc7ab98-2fbb-4461-908c-bd2f346b6250"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.914711 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.914759 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.914776 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6thb\" (UniqueName: \"kubernetes.io/projected/ccc7ab98-2fbb-4461-908c-bd2f346b6250-kube-api-access-z6thb\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:29 crc kubenswrapper[4886]: I0219 21:26:29.915007 4886 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc7ab98-2fbb-4461-908c-bd2f346b6250-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.189432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" event={"ID":"ccc7ab98-2fbb-4461-908c-bd2f346b6250","Type":"ContainerDied","Data":"eb736358af1a85755239c1993e03df619c8d1a32f5df5f6c13b5a7656964b7ac"} Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.189845 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb736358af1a85755239c1993e03df619c8d1a32f5df5f6c13b5a7656964b7ac" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.189517 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lbrlm" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.348567 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql"] Feb 19 21:26:30 crc kubenswrapper[4886]: E0219 21:26:30.349156 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc7ab98-2fbb-4461-908c-bd2f346b6250" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.349183 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc7ab98-2fbb-4461-908c-bd2f346b6250" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.349534 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc7ab98-2fbb-4461-908c-bd2f346b6250" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.350759 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.352684 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.353524 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.355154 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.363502 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.371415 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql"] Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.424754 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6564fcf45d-4kfd4" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.427513 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zd5\" (UniqueName: \"kubernetes.io/projected/11c0c305-4a2b-420b-86c4-8db3e38b7739-kube-api-access-w7zd5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.427875 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: 
\"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.428096 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.486190 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-76667cbdb5-lcq2d"] Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.486443 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-76667cbdb5-lcq2d" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerName="heat-engine" containerID="cri-o://5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" gracePeriod=60 Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.532510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zd5\" (UniqueName: \"kubernetes.io/projected/11c0c305-4a2b-420b-86c4-8db3e38b7739-kube-api-access-w7zd5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.532653 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc 
kubenswrapper[4886]: I0219 21:26:30.532780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.538535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.538683 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.547988 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.553789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.562571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zd5\" (UniqueName: \"kubernetes.io/projected/11c0c305-4a2b-420b-86c4-8db3e38b7739-kube-api-access-w7zd5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wq2ql\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.687201 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.695439 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:30 crc kubenswrapper[4886]: I0219 21:26:30.846396 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" podUID="70bf22da-3158-430f-9485-fdd03fd4b6ff" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.197778 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.247644 4886 generic.go:334] "Generic (PLEG): container finished" podID="70bf22da-3158-430f-9485-fdd03fd4b6ff" containerID="7e787a7817655415e497bd3652bf130deef61c88ec151456a34f41264170d90c" exitCode=0 Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.247711 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" event={"ID":"70bf22da-3158-430f-9485-fdd03fd4b6ff","Type":"ContainerDied","Data":"7e787a7817655415e497bd3652bf130deef61c88ec151456a34f41264170d90c"} Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.267456 4886 generic.go:334] "Generic (PLEG): container finished" podID="c2dbc6be-5448-4402-b63d-240f82bb94de" containerID="89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55" exitCode=0 Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.267499 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d7bdc66c5-25hmr" 
event={"ID":"c2dbc6be-5448-4402-b63d-240f82bb94de","Type":"ContainerDied","Data":"89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55"} Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.267524 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d7bdc66c5-25hmr" event={"ID":"c2dbc6be-5448-4402-b63d-240f82bb94de","Type":"ContainerDied","Data":"fd7344445c658b4a70400cfd0b672189e992bb5ac22f7ededed5d083d9244ea0"} Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.267539 4886 scope.go:117] "RemoveContainer" containerID="89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.267723 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d7bdc66c5-25hmr" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.337367 4886 scope.go:117] "RemoveContainer" containerID="89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55" Feb 19 21:26:31 crc kubenswrapper[4886]: E0219 21:26:31.337966 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55\": container with ID starting with 89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55 not found: ID does not exist" containerID="89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.338017 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55"} err="failed to get container status \"89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55\": rpc error: code = NotFound desc = could not find container \"89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55\": container with ID starting with 
89053bb196140a9f6436b23debe8e29c8910cd56c9caf6579c26e7556eb9bc55 not found: ID does not exist" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.363943 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-public-tls-certs\") pod \"c2dbc6be-5448-4402-b63d-240f82bb94de\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.364065 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-internal-tls-certs\") pod \"c2dbc6be-5448-4402-b63d-240f82bb94de\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.364165 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data-custom\") pod \"c2dbc6be-5448-4402-b63d-240f82bb94de\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.364557 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data\") pod \"c2dbc6be-5448-4402-b63d-240f82bb94de\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.364620 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-combined-ca-bundle\") pod \"c2dbc6be-5448-4402-b63d-240f82bb94de\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.364651 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fx6qd\" (UniqueName: \"kubernetes.io/projected/c2dbc6be-5448-4402-b63d-240f82bb94de-kube-api-access-fx6qd\") pod \"c2dbc6be-5448-4402-b63d-240f82bb94de\" (UID: \"c2dbc6be-5448-4402-b63d-240f82bb94de\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.374784 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c2dbc6be-5448-4402-b63d-240f82bb94de" (UID: "c2dbc6be-5448-4402-b63d-240f82bb94de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.395113 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2dbc6be-5448-4402-b63d-240f82bb94de-kube-api-access-fx6qd" (OuterVolumeSpecName: "kube-api-access-fx6qd") pod "c2dbc6be-5448-4402-b63d-240f82bb94de" (UID: "c2dbc6be-5448-4402-b63d-240f82bb94de"). InnerVolumeSpecName "kube-api-access-fx6qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.417191 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2dbc6be-5448-4402-b63d-240f82bb94de" (UID: "c2dbc6be-5448-4402-b63d-240f82bb94de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.461045 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data" (OuterVolumeSpecName: "config-data") pod "c2dbc6be-5448-4402-b63d-240f82bb94de" (UID: "c2dbc6be-5448-4402-b63d-240f82bb94de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.486060 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.486092 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.486101 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.486109 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6qd\" (UniqueName: \"kubernetes.io/projected/c2dbc6be-5448-4402-b63d-240f82bb94de-kube-api-access-fx6qd\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.491435 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c2dbc6be-5448-4402-b63d-240f82bb94de" (UID: "c2dbc6be-5448-4402-b63d-240f82bb94de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.517983 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c2dbc6be-5448-4402-b63d-240f82bb94de" (UID: "c2dbc6be-5448-4402-b63d-240f82bb94de"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.589749 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.589797 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2dbc6be-5448-4402-b63d-240f82bb94de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.622643 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql"] Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.664996 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.682419 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d7bdc66c5-25hmr"] Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.694038 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d7bdc66c5-25hmr"] Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.798928 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-public-tls-certs\") pod \"70bf22da-3158-430f-9485-fdd03fd4b6ff\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.799351 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data-custom\") pod \"70bf22da-3158-430f-9485-fdd03fd4b6ff\" (UID: 
\"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.799392 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-internal-tls-certs\") pod \"70bf22da-3158-430f-9485-fdd03fd4b6ff\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.799474 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data\") pod \"70bf22da-3158-430f-9485-fdd03fd4b6ff\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.799631 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9t4n\" (UniqueName: \"kubernetes.io/projected/70bf22da-3158-430f-9485-fdd03fd4b6ff-kube-api-access-p9t4n\") pod \"70bf22da-3158-430f-9485-fdd03fd4b6ff\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.799715 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-combined-ca-bundle\") pod \"70bf22da-3158-430f-9485-fdd03fd4b6ff\" (UID: \"70bf22da-3158-430f-9485-fdd03fd4b6ff\") " Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.805172 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "70bf22da-3158-430f-9485-fdd03fd4b6ff" (UID: "70bf22da-3158-430f-9485-fdd03fd4b6ff"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.806142 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70bf22da-3158-430f-9485-fdd03fd4b6ff-kube-api-access-p9t4n" (OuterVolumeSpecName: "kube-api-access-p9t4n") pod "70bf22da-3158-430f-9485-fdd03fd4b6ff" (UID: "70bf22da-3158-430f-9485-fdd03fd4b6ff"). InnerVolumeSpecName "kube-api-access-p9t4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.843205 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70bf22da-3158-430f-9485-fdd03fd4b6ff" (UID: "70bf22da-3158-430f-9485-fdd03fd4b6ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.877811 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data" (OuterVolumeSpecName: "config-data") pod "70bf22da-3158-430f-9485-fdd03fd4b6ff" (UID: "70bf22da-3158-430f-9485-fdd03fd4b6ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.887456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "70bf22da-3158-430f-9485-fdd03fd4b6ff" (UID: "70bf22da-3158-430f-9485-fdd03fd4b6ff"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.903444 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.903484 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.903496 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.903506 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9t4n\" (UniqueName: \"kubernetes.io/projected/70bf22da-3158-430f-9485-fdd03fd4b6ff-kube-api-access-p9t4n\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.903520 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:31 crc kubenswrapper[4886]: I0219 21:26:31.905970 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70bf22da-3158-430f-9485-fdd03fd4b6ff" (UID: "70bf22da-3158-430f-9485-fdd03fd4b6ff"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.005607 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70bf22da-3158-430f-9485-fdd03fd4b6ff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.010753 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.282677 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" event={"ID":"11c0c305-4a2b-420b-86c4-8db3e38b7739","Type":"ContainerStarted","Data":"c1c30a1450804b211a887617bd3134b7f5ca305309c1ed1f6bc39c3abc559366"} Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.285721 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" event={"ID":"70bf22da-3158-430f-9485-fdd03fd4b6ff","Type":"ContainerDied","Data":"f02416e918182c73d4b06fb0164cb53ba9f25f13d3706d93c8ad5cc616deb228"} Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.285760 4886 scope.go:117] "RemoveContainer" containerID="7e787a7817655415e497bd3652bf130deef61c88ec151456a34f41264170d90c" Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.285796 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b8b45588b-z55k4" Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.343768 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b8b45588b-z55k4"] Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.376428 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b8b45588b-z55k4"] Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.612787 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70bf22da-3158-430f-9485-fdd03fd4b6ff" path="/var/lib/kubelet/pods/70bf22da-3158-430f-9485-fdd03fd4b6ff/volumes" Feb 19 21:26:32 crc kubenswrapper[4886]: I0219 21:26:32.613626 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2dbc6be-5448-4402-b63d-240f82bb94de" path="/var/lib/kubelet/pods/c2dbc6be-5448-4402-b63d-240f82bb94de/volumes" Feb 19 21:26:33 crc kubenswrapper[4886]: E0219 21:26:33.155621 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:26:33 crc kubenswrapper[4886]: E0219 21:26:33.162492 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:26:33 crc kubenswrapper[4886]: E0219 21:26:33.164482 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:26:33 crc kubenswrapper[4886]: E0219 21:26:33.164523 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-76667cbdb5-lcq2d" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerName="heat-engine" Feb 19 21:26:33 crc kubenswrapper[4886]: I0219 21:26:33.299095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" event={"ID":"11c0c305-4a2b-420b-86c4-8db3e38b7739","Type":"ContainerStarted","Data":"4fec1e21d76d899a3b46e2e53529651ce761dfc6706c364480dce1eb5bfaee8a"} Feb 19 21:26:33 crc kubenswrapper[4886]: I0219 21:26:33.320715 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" podStartSLOduration=2.926700925 podStartE2EDuration="3.320692931s" podCreationTimestamp="2026-02-19 21:26:30 +0000 UTC" firstStartedPulling="2026-02-19 21:26:31.614349411 +0000 UTC m=+1622.242192461" lastFinishedPulling="2026-02-19 21:26:32.008341417 +0000 UTC m=+1622.636184467" observedRunningTime="2026-02-19 21:26:33.312211983 +0000 UTC m=+1623.940055033" watchObservedRunningTime="2026-02-19 21:26:33.320692931 +0000 UTC m=+1623.948535981" Feb 19 21:26:35 crc kubenswrapper[4886]: I0219 21:26:35.321119 4886 generic.go:334] "Generic (PLEG): container finished" podID="11c0c305-4a2b-420b-86c4-8db3e38b7739" containerID="4fec1e21d76d899a3b46e2e53529651ce761dfc6706c364480dce1eb5bfaee8a" exitCode=0 Feb 19 21:26:35 crc kubenswrapper[4886]: I0219 21:26:35.321325 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" 
event={"ID":"11c0c305-4a2b-420b-86c4-8db3e38b7739","Type":"ContainerDied","Data":"4fec1e21d76d899a3b46e2e53529651ce761dfc6706c364480dce1eb5bfaee8a"} Feb 19 21:26:36 crc kubenswrapper[4886]: I0219 21:26:36.602956 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:26:36 crc kubenswrapper[4886]: E0219 21:26:36.603670 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:26:36 crc kubenswrapper[4886]: I0219 21:26:36.924826 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.016723 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-ssh-key-openstack-edpm-ipam\") pod \"11c0c305-4a2b-420b-86c4-8db3e38b7739\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.016809 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-inventory\") pod \"11c0c305-4a2b-420b-86c4-8db3e38b7739\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.016829 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7zd5\" (UniqueName: 
\"kubernetes.io/projected/11c0c305-4a2b-420b-86c4-8db3e38b7739-kube-api-access-w7zd5\") pod \"11c0c305-4a2b-420b-86c4-8db3e38b7739\" (UID: \"11c0c305-4a2b-420b-86c4-8db3e38b7739\") " Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.027468 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c0c305-4a2b-420b-86c4-8db3e38b7739-kube-api-access-w7zd5" (OuterVolumeSpecName: "kube-api-access-w7zd5") pod "11c0c305-4a2b-420b-86c4-8db3e38b7739" (UID: "11c0c305-4a2b-420b-86c4-8db3e38b7739"). InnerVolumeSpecName "kube-api-access-w7zd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.051636 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11c0c305-4a2b-420b-86c4-8db3e38b7739" (UID: "11c0c305-4a2b-420b-86c4-8db3e38b7739"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.060646 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-inventory" (OuterVolumeSpecName: "inventory") pod "11c0c305-4a2b-420b-86c4-8db3e38b7739" (UID: "11c0c305-4a2b-420b-86c4-8db3e38b7739"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.084918 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.121669 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.121706 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11c0c305-4a2b-420b-86c4-8db3e38b7739-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.121716 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7zd5\" (UniqueName: \"kubernetes.io/projected/11c0c305-4a2b-420b-86c4-8db3e38b7739-kube-api-access-w7zd5\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.342922 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" event={"ID":"11c0c305-4a2b-420b-86c4-8db3e38b7739","Type":"ContainerDied","Data":"c1c30a1450804b211a887617bd3134b7f5ca305309c1ed1f6bc39c3abc559366"} Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.342962 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c30a1450804b211a887617bd3134b7f5ca305309c1ed1f6bc39c3abc559366" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.343003 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wq2ql" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.446363 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24"] Feb 19 21:26:37 crc kubenswrapper[4886]: E0219 21:26:37.446877 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2dbc6be-5448-4402-b63d-240f82bb94de" containerName="heat-api" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.446894 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2dbc6be-5448-4402-b63d-240f82bb94de" containerName="heat-api" Feb 19 21:26:37 crc kubenswrapper[4886]: E0219 21:26:37.446938 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c0c305-4a2b-420b-86c4-8db3e38b7739" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.446945 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c0c305-4a2b-420b-86c4-8db3e38b7739" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 21:26:37 crc kubenswrapper[4886]: E0219 21:26:37.446962 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70bf22da-3158-430f-9485-fdd03fd4b6ff" containerName="heat-cfnapi" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.446968 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="70bf22da-3158-430f-9485-fdd03fd4b6ff" containerName="heat-cfnapi" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.447177 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="70bf22da-3158-430f-9485-fdd03fd4b6ff" containerName="heat-cfnapi" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.447195 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c0c305-4a2b-420b-86c4-8db3e38b7739" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.447217 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c2dbc6be-5448-4402-b63d-240f82bb94de" containerName="heat-api" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.448042 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.450389 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.450561 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.450737 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.452347 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.458836 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24"] Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.535210 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgvr\" (UniqueName: \"kubernetes.io/projected/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-kube-api-access-kdgvr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.535404 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.535624 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.535740 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.637807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.637889 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.638001 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdgvr\" (UniqueName: \"kubernetes.io/projected/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-kube-api-access-kdgvr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.638072 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.644498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.644602 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.656501 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.659835 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdgvr\" (UniqueName: \"kubernetes.io/projected/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-kube-api-access-kdgvr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-snc24\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:37 crc kubenswrapper[4886]: I0219 21:26:37.767100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:26:38 crc kubenswrapper[4886]: I0219 21:26:38.417065 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24"] Feb 19 21:26:38 crc kubenswrapper[4886]: W0219 21:26:38.417686 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a12ee8_44ec_4c1d_9b3b_b78e49eea138.slice/crio-44d1136b8a545bcd9c8bb0faceb2ebff6f7e87e2d515bf745a8dbb085ff18256 WatchSource:0}: Error finding container 44d1136b8a545bcd9c8bb0faceb2ebff6f7e87e2d515bf745a8dbb085ff18256: Status 404 returned error can't find the container with id 44d1136b8a545bcd9c8bb0faceb2ebff6f7e87e2d515bf745a8dbb085ff18256 Feb 19 21:26:39 crc kubenswrapper[4886]: I0219 21:26:39.365558 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" event={"ID":"15a12ee8-44ec-4c1d-9b3b-b78e49eea138","Type":"ContainerStarted","Data":"4af2358ad0ef35ed5ebe880c2f4693627cf640da7524bd7def0689b2117090f4"} Feb 19 21:26:39 crc 
kubenswrapper[4886]: I0219 21:26:39.366121 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" event={"ID":"15a12ee8-44ec-4c1d-9b3b-b78e49eea138","Type":"ContainerStarted","Data":"44d1136b8a545bcd9c8bb0faceb2ebff6f7e87e2d515bf745a8dbb085ff18256"} Feb 19 21:26:39 crc kubenswrapper[4886]: I0219 21:26:39.392806 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" podStartSLOduration=1.961516095 podStartE2EDuration="2.392783665s" podCreationTimestamp="2026-02-19 21:26:37 +0000 UTC" firstStartedPulling="2026-02-19 21:26:38.419382145 +0000 UTC m=+1629.047225195" lastFinishedPulling="2026-02-19 21:26:38.850649705 +0000 UTC m=+1629.478492765" observedRunningTime="2026-02-19 21:26:39.381671612 +0000 UTC m=+1630.009514662" watchObservedRunningTime="2026-02-19 21:26:39.392783665 +0000 UTC m=+1630.020626715" Feb 19 21:26:39 crc kubenswrapper[4886]: I0219 21:26:39.770504 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 19 21:26:39 crc kubenswrapper[4886]: I0219 21:26:39.855441 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.114857 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-cqfbx"] Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.128919 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-cqfbx"] Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.174882 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zdcl5"] Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.189235 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.199091 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.223913 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zdcl5"] Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.331604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-config-data\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.331654 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nzwb\" (UniqueName: \"kubernetes.io/projected/d3c85441-b11d-4dc2-8964-872b7934ef4c-kube-api-access-6nzwb\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.331797 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-combined-ca-bundle\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.331813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-scripts\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.434189 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-config-data\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.434358 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nzwb\" (UniqueName: \"kubernetes.io/projected/d3c85441-b11d-4dc2-8964-872b7934ef4c-kube-api-access-6nzwb\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.434610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-combined-ca-bundle\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.434630 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-scripts\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.442151 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-combined-ca-bundle\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.442569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-scripts\") 
pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.447035 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-config-data\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.467663 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nzwb\" (UniqueName: \"kubernetes.io/projected/d3c85441-b11d-4dc2-8964-872b7934ef4c-kube-api-access-6nzwb\") pod \"aodh-db-sync-zdcl5\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.525085 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.616494 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aec76ed-0cd6-4468-85a4-23c32dcc5708" path="/var/lib/kubelet/pods/0aec76ed-0cd6-4468-85a4-23c32dcc5708/volumes" Feb 19 21:26:40 crc kubenswrapper[4886]: I0219 21:26:40.745029 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d7bdc66c5-25hmr" podUID="c2dbc6be-5448-4402-b63d-240f82bb94de" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.218:8004/healthcheck\": dial tcp 10.217.0.218:8004: i/o timeout" Feb 19 21:26:41 crc kubenswrapper[4886]: I0219 21:26:41.095635 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zdcl5"] Feb 19 21:26:41 crc kubenswrapper[4886]: I0219 21:26:41.395491 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zdcl5" 
event={"ID":"d3c85441-b11d-4dc2-8964-872b7934ef4c","Type":"ContainerStarted","Data":"ec5c99db979149e777cd477271106dee022d3bd8c522cf76c1f32c8707c7be1c"} Feb 19 21:26:43 crc kubenswrapper[4886]: E0219 21:26:43.155631 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:26:43 crc kubenswrapper[4886]: E0219 21:26:43.157314 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:26:43 crc kubenswrapper[4886]: E0219 21:26:43.158924 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 21:26:43 crc kubenswrapper[4886]: E0219 21:26:43.158961 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-76667cbdb5-lcq2d" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerName="heat-engine" Feb 19 21:26:44 crc kubenswrapper[4886]: I0219 21:26:44.641403 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" 
containerID="cri-o://2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637" gracePeriod=604796 Feb 19 21:26:45 crc kubenswrapper[4886]: I0219 21:26:45.458591 4886 generic.go:334] "Generic (PLEG): container finished" podID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" exitCode=0 Feb 19 21:26:45 crc kubenswrapper[4886]: I0219 21:26:45.458637 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-76667cbdb5-lcq2d" event={"ID":"c5304d1d-93a6-42a7-9e2f-e86e83a8e699","Type":"ContainerDied","Data":"5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887"} Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.569711 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.692526 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxzv5\" (UniqueName: \"kubernetes.io/projected/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-kube-api-access-rxzv5\") pod \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.692586 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data-custom\") pod \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.692673 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data\") pod \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.692788 
4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-combined-ca-bundle\") pod \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\" (UID: \"c5304d1d-93a6-42a7-9e2f-e86e83a8e699\") " Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.698820 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5304d1d-93a6-42a7-9e2f-e86e83a8e699" (UID: "c5304d1d-93a6-42a7-9e2f-e86e83a8e699"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.703990 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-kube-api-access-rxzv5" (OuterVolumeSpecName: "kube-api-access-rxzv5") pod "c5304d1d-93a6-42a7-9e2f-e86e83a8e699" (UID: "c5304d1d-93a6-42a7-9e2f-e86e83a8e699"). InnerVolumeSpecName "kube-api-access-rxzv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.733428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5304d1d-93a6-42a7-9e2f-e86e83a8e699" (UID: "c5304d1d-93a6-42a7-9e2f-e86e83a8e699"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.752879 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data" (OuterVolumeSpecName: "config-data") pod "c5304d1d-93a6-42a7-9e2f-e86e83a8e699" (UID: "c5304d1d-93a6-42a7-9e2f-e86e83a8e699"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.796479 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.796521 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxzv5\" (UniqueName: \"kubernetes.io/projected/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-kube-api-access-rxzv5\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.796534 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:46 crc kubenswrapper[4886]: I0219 21:26:46.796545 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5304d1d-93a6-42a7-9e2f-e86e83a8e699-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.484369 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-76667cbdb5-lcq2d" Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.484380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-76667cbdb5-lcq2d" event={"ID":"c5304d1d-93a6-42a7-9e2f-e86e83a8e699","Type":"ContainerDied","Data":"c32a42602854f489e55a91ddbba0b37b161dc4c63dd0015ad6715e3b3bd31182"} Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.484872 4886 scope.go:117] "RemoveContainer" containerID="5336fbaa74d89376707a74fbef1aa8baa31e79d42046c56350ac7a30fc030887" Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.486556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zdcl5" event={"ID":"d3c85441-b11d-4dc2-8964-872b7934ef4c","Type":"ContainerStarted","Data":"50388f689c236d04897ea174f8b7774e8be5e7752b4ca82e711f1e1cad6502a4"} Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.518413 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zdcl5" podStartSLOduration=2.402882898 podStartE2EDuration="7.518379344s" podCreationTimestamp="2026-02-19 21:26:40 +0000 UTC" firstStartedPulling="2026-02-19 21:26:41.092970454 +0000 UTC m=+1631.720813514" lastFinishedPulling="2026-02-19 21:26:46.20846692 +0000 UTC m=+1636.836309960" observedRunningTime="2026-02-19 21:26:47.510181553 +0000 UTC m=+1638.138024603" watchObservedRunningTime="2026-02-19 21:26:47.518379344 +0000 UTC m=+1638.146222414" Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.584018 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-76667cbdb5-lcq2d"] Feb 19 21:26:47 crc kubenswrapper[4886]: I0219 21:26:47.603023 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-76667cbdb5-lcq2d"] Feb 19 21:26:48 crc kubenswrapper[4886]: I0219 21:26:48.352646 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" 
podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 19 21:26:48 crc kubenswrapper[4886]: I0219 21:26:48.601933 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:26:48 crc kubenswrapper[4886]: E0219 21:26:48.602214 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:26:48 crc kubenswrapper[4886]: I0219 21:26:48.616233 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" path="/var/lib/kubelet/pods/c5304d1d-93a6-42a7-9e2f-e86e83a8e699/volumes" Feb 19 21:26:49 crc kubenswrapper[4886]: I0219 21:26:49.512057 4886 generic.go:334] "Generic (PLEG): container finished" podID="d3c85441-b11d-4dc2-8964-872b7934ef4c" containerID="50388f689c236d04897ea174f8b7774e8be5e7752b4ca82e711f1e1cad6502a4" exitCode=0 Feb 19 21:26:49 crc kubenswrapper[4886]: I0219 21:26:49.512140 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zdcl5" event={"ID":"d3c85441-b11d-4dc2-8964-872b7934ef4c","Type":"ContainerDied","Data":"50388f689c236d04897ea174f8b7774e8be5e7752b4ca82e711f1e1cad6502a4"} Feb 19 21:26:50 crc kubenswrapper[4886]: I0219 21:26:50.130652 4886 scope.go:117] "RemoveContainer" containerID="dc06bce21f9af90aad092f70022ebda4aafc628f0f98bf1b203504b86416f931" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.036676 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.164767 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-scripts\") pod \"d3c85441-b11d-4dc2-8964-872b7934ef4c\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.164815 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nzwb\" (UniqueName: \"kubernetes.io/projected/d3c85441-b11d-4dc2-8964-872b7934ef4c-kube-api-access-6nzwb\") pod \"d3c85441-b11d-4dc2-8964-872b7934ef4c\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.164886 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-config-data\") pod \"d3c85441-b11d-4dc2-8964-872b7934ef4c\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.165015 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-combined-ca-bundle\") pod \"d3c85441-b11d-4dc2-8964-872b7934ef4c\" (UID: \"d3c85441-b11d-4dc2-8964-872b7934ef4c\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.171750 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c85441-b11d-4dc2-8964-872b7934ef4c-kube-api-access-6nzwb" (OuterVolumeSpecName: "kube-api-access-6nzwb") pod "d3c85441-b11d-4dc2-8964-872b7934ef4c" (UID: "d3c85441-b11d-4dc2-8964-872b7934ef4c"). InnerVolumeSpecName "kube-api-access-6nzwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.179091 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-scripts" (OuterVolumeSpecName: "scripts") pod "d3c85441-b11d-4dc2-8964-872b7934ef4c" (UID: "d3c85441-b11d-4dc2-8964-872b7934ef4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.213351 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-config-data" (OuterVolumeSpecName: "config-data") pod "d3c85441-b11d-4dc2-8964-872b7934ef4c" (UID: "d3c85441-b11d-4dc2-8964-872b7934ef4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.215632 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3c85441-b11d-4dc2-8964-872b7934ef4c" (UID: "d3c85441-b11d-4dc2-8964-872b7934ef4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.267588 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.267618 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nzwb\" (UniqueName: \"kubernetes.io/projected/d3c85441-b11d-4dc2-8964-872b7934ef4c-kube-api-access-6nzwb\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.267629 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.267638 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c85441-b11d-4dc2-8964-872b7934ef4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.318053 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.374862 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/638a08ec-2f97-4b36-919f-9346af224a16-erlang-cookie-secret\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.374904 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-tls\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.375667 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.375775 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-plugins\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.375826 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-server-conf\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.375855 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-erlang-cookie\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.375916 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-confd\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.376063 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/638a08ec-2f97-4b36-919f-9346af224a16-pod-info\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.376135 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-plugins-conf\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.376197 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-config-data\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.376230 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmz5g\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-kube-api-access-vmz5g\") pod \"638a08ec-2f97-4b36-919f-9346af224a16\" (UID: \"638a08ec-2f97-4b36-919f-9346af224a16\") " Feb 19 
21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.378637 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.378735 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.378877 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.379784 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638a08ec-2f97-4b36-919f-9346af224a16-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.381634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.382252 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-kube-api-access-vmz5g" (OuterVolumeSpecName: "kube-api-access-vmz5g") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "kube-api-access-vmz5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.390098 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/638a08ec-2f97-4b36-919f-9346af224a16-pod-info" (OuterVolumeSpecName: "pod-info") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.445561 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2" (OuterVolumeSpecName: "persistence") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.458549 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-config-data" (OuterVolumeSpecName: "config-data") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.478911 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.478956 4886 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/638a08ec-2f97-4b36-919f-9346af224a16-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.479996 4886 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.480013 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.480025 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmz5g\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-kube-api-access-vmz5g\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.480038 4886 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/638a08ec-2f97-4b36-919f-9346af224a16-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.480048 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.480087 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") on node \"crc\" " Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.480104 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.507389 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-server-conf" (OuterVolumeSpecName: "server-conf") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.550555 4886 generic.go:334] "Generic (PLEG): container finished" podID="638a08ec-2f97-4b36-919f-9346af224a16" containerID="2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637" exitCode=0 Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.550614 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"638a08ec-2f97-4b36-919f-9346af224a16","Type":"ContainerDied","Data":"2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637"} Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.550639 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"638a08ec-2f97-4b36-919f-9346af224a16","Type":"ContainerDied","Data":"d0953245636e3c75001058e15daa0499c12c0f5cae74c535aa6b04a1c709c1cf"} Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.550654 4886 scope.go:117] "RemoveContainer" containerID="2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.550795 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.554907 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.556002 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2") on node "crc" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.560058 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zdcl5" event={"ID":"d3c85441-b11d-4dc2-8964-872b7934ef4c","Type":"ContainerDied","Data":"ec5c99db979149e777cd477271106dee022d3bd8c522cf76c1f32c8707c7be1c"} Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.560102 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5c99db979149e777cd477271106dee022d3bd8c522cf76c1f32c8707c7be1c" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.560156 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zdcl5" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.573893 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "638a08ec-2f97-4b36-919f-9346af224a16" (UID: "638a08ec-2f97-4b36-919f-9346af224a16"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.584307 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.584364 4886 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/638a08ec-2f97-4b36-919f-9346af224a16-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.584382 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/638a08ec-2f97-4b36-919f-9346af224a16-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.584834 4886 scope.go:117] "RemoveContainer" containerID="739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.639524 4886 scope.go:117] "RemoveContainer" containerID="2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637" Feb 19 21:26:51 crc kubenswrapper[4886]: E0219 21:26:51.640273 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637\": container with ID starting with 2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637 not found: ID does not exist" containerID="2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.640376 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637"} err="failed to get container status 
\"2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637\": rpc error: code = NotFound desc = could not find container \"2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637\": container with ID starting with 2468847edf33194cb7763d1bfea4cb72f84908509d095576c56938e73d2b5637 not found: ID does not exist" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.640455 4886 scope.go:117] "RemoveContainer" containerID="739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d" Feb 19 21:26:51 crc kubenswrapper[4886]: E0219 21:26:51.640847 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d\": container with ID starting with 739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d not found: ID does not exist" containerID="739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.640917 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d"} err="failed to get container status \"739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d\": rpc error: code = NotFound desc = could not find container \"739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d\": container with ID starting with 739e6580757c5e540a290dbd54c31797b2ccce6cc67cc0b6599a7df349d0575d not found: ID does not exist" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.871864 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.872578 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-api" 
containerID="cri-o://823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0" gracePeriod=30 Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.872677 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-listener" containerID="cri-o://e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87" gracePeriod=30 Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.872595 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-notifier" containerID="cri-o://7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5" gracePeriod=30 Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.872620 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-evaluator" containerID="cri-o://8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477" gracePeriod=30 Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.899889 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.920151 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.935694 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:26:51 crc kubenswrapper[4886]: E0219 21:26:51.936181 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="setup-container" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936196 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="setup-container" Feb 19 21:26:51 crc 
kubenswrapper[4886]: E0219 21:26:51.936208 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936214 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" Feb 19 21:26:51 crc kubenswrapper[4886]: E0219 21:26:51.936225 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c85441-b11d-4dc2-8964-872b7934ef4c" containerName="aodh-db-sync" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936233 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c85441-b11d-4dc2-8964-872b7934ef4c" containerName="aodh-db-sync" Feb 19 21:26:51 crc kubenswrapper[4886]: E0219 21:26:51.936254 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerName="heat-engine" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936276 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerName="heat-engine" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936480 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="638a08ec-2f97-4b36-919f-9346af224a16" containerName="rabbitmq" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936519 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5304d1d-93a6-42a7-9e2f-e86e83a8e699" containerName="heat-engine" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.936532 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c85441-b11d-4dc2-8964-872b7934ef4c" containerName="aodh-db-sync" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.937752 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 21:26:51 crc kubenswrapper[4886]: I0219 21:26:51.952118 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097610 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097628 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c85970ce-fb0c-432d-9c06-9157d09eb182-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097709 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-config-data\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097769 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097793 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c85970ce-fb0c-432d-9c06-9157d09eb182-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097897 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097961 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.097992 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhr7\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-kube-api-access-zqhr7\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.200065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.201442 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.201989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhr7\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-kube-api-access-zqhr7\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.202190 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: 
\"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203090 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203132 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203166 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c85970ce-fb0c-432d-9c06-9157d09eb182-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203341 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-config-data\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203372 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203417 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c85970ce-fb0c-432d-9c06-9157d09eb182-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.203568 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.204184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.204600 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-config-data\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.204972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.205335 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c85970ce-fb0c-432d-9c06-9157d09eb182-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.212034 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c85970ce-fb0c-432d-9c06-9157d09eb182-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.212064 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c85970ce-fb0c-432d-9c06-9157d09eb182-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.212676 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.213136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc 
kubenswrapper[4886]: I0219 21:26:52.216076 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.216102 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f729d16b313da05fd19265a2c348290c58a3098859ab4c087c185b1545dd7ea/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.225142 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhr7\" (UniqueName: \"kubernetes.io/projected/c85970ce-fb0c-432d-9c06-9157d09eb182-kube-api-access-zqhr7\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.307402 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1937355-1b88-44b3-a3dc-d50452e62fb2\") pod \"rabbitmq-server-1\" (UID: \"c85970ce-fb0c-432d-9c06-9157d09eb182\") " pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.328695 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.607085 4886 generic.go:334] "Generic (PLEG): container finished" podID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerID="823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0" exitCode=0 Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.627434 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638a08ec-2f97-4b36-919f-9346af224a16" path="/var/lib/kubelet/pods/638a08ec-2f97-4b36-919f-9346af224a16/volumes" Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.628779 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerDied","Data":"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0"} Feb 19 21:26:52 crc kubenswrapper[4886]: I0219 21:26:52.942092 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 21:26:53 crc kubenswrapper[4886]: I0219 21:26:53.623427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c85970ce-fb0c-432d-9c06-9157d09eb182","Type":"ContainerStarted","Data":"299f90de884af7d2de92364d4901bdde76f075bd5b59af0de6a0bd29efedd74c"} Feb 19 21:26:53 crc kubenswrapper[4886]: I0219 21:26:53.629098 4886 generic.go:334] "Generic (PLEG): container finished" podID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerID="8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477" exitCode=0 Feb 19 21:26:53 crc kubenswrapper[4886]: I0219 21:26:53.629148 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerDied","Data":"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477"} Feb 19 21:26:55 crc kubenswrapper[4886]: I0219 21:26:55.650949 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-1" event={"ID":"c85970ce-fb0c-432d-9c06-9157d09eb182","Type":"ContainerStarted","Data":"2001fc3f4999ad3a3e3a07599438f4b14df9ecfc2958966cf0c8723e57a57754"} Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.439714 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.515690 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-combined-ca-bundle\") pod \"d990da31-f5ca-48c3-b4bf-981e1f029e05\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.515765 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-config-data\") pod \"d990da31-f5ca-48c3-b4bf-981e1f029e05\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.515903 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-internal-tls-certs\") pod \"d990da31-f5ca-48c3-b4bf-981e1f029e05\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.515955 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-public-tls-certs\") pod \"d990da31-f5ca-48c3-b4bf-981e1f029e05\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.515980 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-scripts\") pod \"d990da31-f5ca-48c3-b4bf-981e1f029e05\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.516090 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh27k\" (UniqueName: \"kubernetes.io/projected/d990da31-f5ca-48c3-b4bf-981e1f029e05-kube-api-access-sh27k\") pod \"d990da31-f5ca-48c3-b4bf-981e1f029e05\" (UID: \"d990da31-f5ca-48c3-b4bf-981e1f029e05\") " Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.522674 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-scripts" (OuterVolumeSpecName: "scripts") pod "d990da31-f5ca-48c3-b4bf-981e1f029e05" (UID: "d990da31-f5ca-48c3-b4bf-981e1f029e05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.543913 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d990da31-f5ca-48c3-b4bf-981e1f029e05-kube-api-access-sh27k" (OuterVolumeSpecName: "kube-api-access-sh27k") pod "d990da31-f5ca-48c3-b4bf-981e1f029e05" (UID: "d990da31-f5ca-48c3-b4bf-981e1f029e05"). InnerVolumeSpecName "kube-api-access-sh27k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.620004 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.620203 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh27k\" (UniqueName: \"kubernetes.io/projected/d990da31-f5ca-48c3-b4bf-981e1f029e05-kube-api-access-sh27k\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.650403 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d990da31-f5ca-48c3-b4bf-981e1f029e05" (UID: "d990da31-f5ca-48c3-b4bf-981e1f029e05"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.674527 4886 generic.go:334] "Generic (PLEG): container finished" podID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerID="e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87" exitCode=0 Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.675530 4886 generic.go:334] "Generic (PLEG): container finished" podID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerID="7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5" exitCode=0 Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.674597 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.674618 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerDied","Data":"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87"} Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.675681 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerDied","Data":"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5"} Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.675695 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d990da31-f5ca-48c3-b4bf-981e1f029e05","Type":"ContainerDied","Data":"b0fe2e051951d37bc434e03eb0f3cb23f98df56898182222aa2991ac33ea0bc8"} Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.675969 4886 scope.go:117] "RemoveContainer" containerID="e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.697400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d990da31-f5ca-48c3-b4bf-981e1f029e05" (UID: "d990da31-f5ca-48c3-b4bf-981e1f029e05"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.715440 4886 scope.go:117] "RemoveContainer" containerID="7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.723094 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.723128 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.729782 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-config-data" (OuterVolumeSpecName: "config-data") pod "d990da31-f5ca-48c3-b4bf-981e1f029e05" (UID: "d990da31-f5ca-48c3-b4bf-981e1f029e05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.776441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d990da31-f5ca-48c3-b4bf-981e1f029e05" (UID: "d990da31-f5ca-48c3-b4bf-981e1f029e05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.790707 4886 scope.go:117] "RemoveContainer" containerID="8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.829995 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.830031 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d990da31-f5ca-48c3-b4bf-981e1f029e05-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.882354 4886 scope.go:117] "RemoveContainer" containerID="823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.910633 4886 scope.go:117] "RemoveContainer" containerID="e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87" Feb 19 21:26:56 crc kubenswrapper[4886]: E0219 21:26:56.911107 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87\": container with ID starting with e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87 not found: ID does not exist" containerID="e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.911207 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87"} err="failed to get container status \"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87\": rpc error: code = NotFound desc = could not find container 
\"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87\": container with ID starting with e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.911305 4886 scope.go:117] "RemoveContainer" containerID="7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5" Feb 19 21:26:56 crc kubenswrapper[4886]: E0219 21:26:56.911689 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5\": container with ID starting with 7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5 not found: ID does not exist" containerID="7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.911725 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5"} err="failed to get container status \"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5\": rpc error: code = NotFound desc = could not find container \"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5\": container with ID starting with 7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.911767 4886 scope.go:117] "RemoveContainer" containerID="8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477" Feb 19 21:26:56 crc kubenswrapper[4886]: E0219 21:26:56.912033 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477\": container with ID starting with 8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477 not found: ID does not exist" 
containerID="8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.912134 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477"} err="failed to get container status \"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477\": rpc error: code = NotFound desc = could not find container \"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477\": container with ID starting with 8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.912214 4886 scope.go:117] "RemoveContainer" containerID="823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0" Feb 19 21:26:56 crc kubenswrapper[4886]: E0219 21:26:56.912516 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0\": container with ID starting with 823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0 not found: ID does not exist" containerID="823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.912589 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0"} err="failed to get container status \"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0\": rpc error: code = NotFound desc = could not find container \"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0\": container with ID starting with 823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.912672 4886 scope.go:117] 
"RemoveContainer" containerID="e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.913016 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87"} err="failed to get container status \"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87\": rpc error: code = NotFound desc = could not find container \"e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87\": container with ID starting with e49adc275c2a434518c049caf2809ce53c85654c83d79bc592269ad420b80c87 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.913062 4886 scope.go:117] "RemoveContainer" containerID="7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.913447 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5"} err="failed to get container status \"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5\": rpc error: code = NotFound desc = could not find container \"7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5\": container with ID starting with 7e0c0a5e5dae4d2e07a4ee911d0b1c3ad6fd5d02319113899e823f63942383f5 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.913497 4886 scope.go:117] "RemoveContainer" containerID="8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.913719 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477"} err="failed to get container status \"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477\": rpc error: code = 
NotFound desc = could not find container \"8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477\": container with ID starting with 8f741ac56953fe2e098345e3a30dc497be7aebc1a3469b85834a0976304e6477 not found: ID does not exist" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.913798 4886 scope.go:117] "RemoveContainer" containerID="823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0" Feb 19 21:26:56 crc kubenswrapper[4886]: I0219 21:26:56.914033 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0"} err="failed to get container status \"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0\": rpc error: code = NotFound desc = could not find container \"823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0\": container with ID starting with 823460c84a3e63bc856825be9f6026c3db48ca0fa74c728e0ee30ae237b1baf0 not found: ID does not exist" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.029517 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.048932 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.070653 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 21:26:57 crc kubenswrapper[4886]: E0219 21:26:57.071215 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-listener" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071235 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-listener" Feb 19 21:26:57 crc kubenswrapper[4886]: E0219 21:26:57.071246 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" 
containerName="aodh-notifier" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071252 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-notifier" Feb 19 21:26:57 crc kubenswrapper[4886]: E0219 21:26:57.071291 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-api" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071297 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-api" Feb 19 21:26:57 crc kubenswrapper[4886]: E0219 21:26:57.071308 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-evaluator" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071314 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-evaluator" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071554 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-evaluator" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071571 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-listener" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071579 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-notifier" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.071610 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" containerName="aodh-api" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.074209 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.076809 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-85ctw" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.087768 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.087975 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.088554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.088806 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.097766 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.142762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-scripts\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.142856 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-internal-tls-certs\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.142976 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-public-tls-certs\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.143024 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-combined-ca-bundle\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.143356 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdb8v\" (UniqueName: \"kubernetes.io/projected/dbd75038-587e-4ea7-89a9-c3b02c59ca18-kube-api-access-sdb8v\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.143703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-config-data\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.245571 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-public-tls-certs\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.246406 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-combined-ca-bundle\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc 
kubenswrapper[4886]: I0219 21:26:57.246563 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdb8v\" (UniqueName: \"kubernetes.io/projected/dbd75038-587e-4ea7-89a9-c3b02c59ca18-kube-api-access-sdb8v\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.246721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-config-data\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.246834 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-scripts\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.246916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-internal-tls-certs\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.252254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-config-data\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.254820 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-combined-ca-bundle\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " 
pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.254865 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-internal-tls-certs\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.255313 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-public-tls-certs\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.264759 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbd75038-587e-4ea7-89a9-c3b02c59ca18-scripts\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.269692 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdb8v\" (UniqueName: \"kubernetes.io/projected/dbd75038-587e-4ea7-89a9-c3b02c59ca18-kube-api-access-sdb8v\") pod \"aodh-0\" (UID: \"dbd75038-587e-4ea7-89a9-c3b02c59ca18\") " pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.404843 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 21:26:57 crc kubenswrapper[4886]: W0219 21:26:57.923506 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbd75038_587e_4ea7_89a9_c3b02c59ca18.slice/crio-23160c1a46b5ce8389584410fe875327885c7b17aadbafa7d34d7afc0912cde2 WatchSource:0}: Error finding container 23160c1a46b5ce8389584410fe875327885c7b17aadbafa7d34d7afc0912cde2: Status 404 returned error can't find the container with id 23160c1a46b5ce8389584410fe875327885c7b17aadbafa7d34d7afc0912cde2 Feb 19 21:26:57 crc kubenswrapper[4886]: I0219 21:26:57.930516 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 21:26:58 crc kubenswrapper[4886]: I0219 21:26:58.618042 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d990da31-f5ca-48c3-b4bf-981e1f029e05" path="/var/lib/kubelet/pods/d990da31-f5ca-48c3-b4bf-981e1f029e05/volumes" Feb 19 21:26:58 crc kubenswrapper[4886]: I0219 21:26:58.702721 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dbd75038-587e-4ea7-89a9-c3b02c59ca18","Type":"ContainerStarted","Data":"f4681923235512cee1c3fa3ddf1e93092e35a24f563a4eed273ca435f62cdaac"} Feb 19 21:26:58 crc kubenswrapper[4886]: I0219 21:26:58.702768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dbd75038-587e-4ea7-89a9-c3b02c59ca18","Type":"ContainerStarted","Data":"23160c1a46b5ce8389584410fe875327885c7b17aadbafa7d34d7afc0912cde2"} Feb 19 21:26:59 crc kubenswrapper[4886]: I0219 21:26:59.721226 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dbd75038-587e-4ea7-89a9-c3b02c59ca18","Type":"ContainerStarted","Data":"cc466c096afee9dd5bbaee2bfc06ddba0b1cead2e25135330068192d55c5c011"} Feb 19 21:27:01 crc kubenswrapper[4886]: I0219 21:27:01.600609 4886 scope.go:117] "RemoveContainer" 
containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:27:01 crc kubenswrapper[4886]: E0219 21:27:01.601243 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:27:01 crc kubenswrapper[4886]: I0219 21:27:01.749462 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dbd75038-587e-4ea7-89a9-c3b02c59ca18","Type":"ContainerStarted","Data":"6d0797fe38b2a02d320f5164c8090d86d9513a5b423c9a0d0f58a549ab8b8659"} Feb 19 21:27:02 crc kubenswrapper[4886]: I0219 21:27:02.762840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dbd75038-587e-4ea7-89a9-c3b02c59ca18","Type":"ContainerStarted","Data":"935f8e47aeec64b79cc17e7c81d0de1871401301a94e43542adbd5d1f90e271b"} Feb 19 21:27:02 crc kubenswrapper[4886]: I0219 21:27:02.799391 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.613118866 podStartE2EDuration="5.799373394s" podCreationTimestamp="2026-02-19 21:26:57 +0000 UTC" firstStartedPulling="2026-02-19 21:26:57.925801914 +0000 UTC m=+1648.553644964" lastFinishedPulling="2026-02-19 21:27:02.112056442 +0000 UTC m=+1652.739899492" observedRunningTime="2026-02-19 21:27:02.796728299 +0000 UTC m=+1653.424571349" watchObservedRunningTime="2026-02-19 21:27:02.799373394 +0000 UTC m=+1653.427216444" Feb 19 21:27:12 crc kubenswrapper[4886]: I0219 21:27:12.601535 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:27:12 crc kubenswrapper[4886]: E0219 
21:27:12.602443 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:27:26 crc kubenswrapper[4886]: I0219 21:27:26.602315 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:27:26 crc kubenswrapper[4886]: E0219 21:27:26.603337 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:27:28 crc kubenswrapper[4886]: I0219 21:27:28.087236 4886 generic.go:334] "Generic (PLEG): container finished" podID="c85970ce-fb0c-432d-9c06-9157d09eb182" containerID="2001fc3f4999ad3a3e3a07599438f4b14df9ecfc2958966cf0c8723e57a57754" exitCode=0 Feb 19 21:27:28 crc kubenswrapper[4886]: I0219 21:27:28.087304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c85970ce-fb0c-432d-9c06-9157d09eb182","Type":"ContainerDied","Data":"2001fc3f4999ad3a3e3a07599438f4b14df9ecfc2958966cf0c8723e57a57754"} Feb 19 21:27:29 crc kubenswrapper[4886]: I0219 21:27:29.108605 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c85970ce-fb0c-432d-9c06-9157d09eb182","Type":"ContainerStarted","Data":"d0ba785703e9c2b2adde220fd2314172409f1a9b97693c17cf3fcedecafc9c80"} Feb 19 21:27:29 crc 
kubenswrapper[4886]: I0219 21:27:29.110029 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 19 21:27:29 crc kubenswrapper[4886]: I0219 21:27:29.153210 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.153179205 podStartE2EDuration="38.153179205s" podCreationTimestamp="2026-02-19 21:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:27:29.133983194 +0000 UTC m=+1679.761826244" watchObservedRunningTime="2026-02-19 21:27:29.153179205 +0000 UTC m=+1679.781022295" Feb 19 21:27:41 crc kubenswrapper[4886]: I0219 21:27:41.601565 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:27:41 crc kubenswrapper[4886]: E0219 21:27:41.602761 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:27:42 crc kubenswrapper[4886]: I0219 21:27:42.331469 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 19 21:27:42 crc kubenswrapper[4886]: I0219 21:27:42.448370 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:27:46 crc kubenswrapper[4886]: I0219 21:27:46.507733 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="rabbitmq" 
containerID="cri-o://6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7" gracePeriod=604796 Feb 19 21:27:47 crc kubenswrapper[4886]: I0219 21:27:47.925670 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 19 21:27:50 crc kubenswrapper[4886]: I0219 21:27:50.284488 4886 scope.go:117] "RemoveContainer" containerID="edcf4d93db363fcecb58646a89eb8b9330d43628486ebc3c4241236bee79acf6" Feb 19 21:27:50 crc kubenswrapper[4886]: I0219 21:27:50.326590 4886 scope.go:117] "RemoveContainer" containerID="1042a789ee012ef695b8251741b4bdb333e843ca37f2160fd731e71ae67f2ac2" Feb 19 21:27:50 crc kubenswrapper[4886]: I0219 21:27:50.365500 4886 scope.go:117] "RemoveContainer" containerID="649464e25b83097625c83746e1b08fffeb5dec3a70bb1c7109d309483a942add" Feb 19 21:27:50 crc kubenswrapper[4886]: I0219 21:27:50.455193 4886 scope.go:117] "RemoveContainer" containerID="05c72c6af8cbe0432e83f5b69fe072ff68c2eef963b711a816723a571d1d3343" Feb 19 21:27:50 crc kubenswrapper[4886]: I0219 21:27:50.479665 4886 scope.go:117] "RemoveContainer" containerID="e20466706196d5e4828283593ca2ccb2364b08ff2aa2b105b1d7546978ba93c7" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.366571 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521378 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-plugins-conf\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521523 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-confd\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521547 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-tls\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521582 4886 generic.go:334] "Generic (PLEG): container finished" podID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerID="6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7" exitCode=0 Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521613 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1ec4082-af5d-46ce-a7ca-88091e668a22-pod-info\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521620 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"f1ec4082-af5d-46ce-a7ca-88091e668a22","Type":"ContainerDied","Data":"6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7"} Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521646 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1ec4082-af5d-46ce-a7ca-88091e668a22","Type":"ContainerDied","Data":"612dcfd532ca33e13d20f1d94ee06ffb69fed04dcf7cfc615b6406193a5f5c22"} Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521663 4886 scope.go:117] "RemoveContainer" containerID="6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521670 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-config-data\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521689 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521735 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1ec4082-af5d-46ce-a7ca-88091e668a22-erlang-cookie-secret\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521754 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521840 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-plugins\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.521885 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-server-conf\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.522414 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.522459 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-erlang-cookie\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.522485 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hftz2\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-kube-api-access-hftz2\") pod \"f1ec4082-af5d-46ce-a7ca-88091e668a22\" (UID: \"f1ec4082-af5d-46ce-a7ca-88091e668a22\") " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.522976 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.523518 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.528065 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.530619 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.530661 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.530682 4886 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.530694 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.534491 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f1ec4082-af5d-46ce-a7ca-88091e668a22-pod-info" (OuterVolumeSpecName: "pod-info") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.534791 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ec4082-af5d-46ce-a7ca-88091e668a22-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.542854 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-kube-api-access-hftz2" (OuterVolumeSpecName: "kube-api-access-hftz2") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "kube-api-access-hftz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.568353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d" (OuterVolumeSpecName: "persistence") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "pvc-1daff07d-3864-4736-a71a-3bbc999db29d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.578810 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-config-data" (OuterVolumeSpecName: "config-data") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.599186 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-server-conf" (OuterVolumeSpecName: "server-conf") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.633041 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") on node \"crc\" " Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.633078 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hftz2\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-kube-api-access-hftz2\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.633089 4886 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1ec4082-af5d-46ce-a7ca-88091e668a22-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.633097 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.633106 4886 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1ec4082-af5d-46ce-a7ca-88091e668a22-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.633115 4886 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1ec4082-af5d-46ce-a7ca-88091e668a22-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.680714 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.680858 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1daff07d-3864-4736-a71a-3bbc999db29d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d") on node "crc" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.687958 4886 scope.go:117] "RemoveContainer" containerID="169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.717009 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f1ec4082-af5d-46ce-a7ca-88091e668a22" (UID: "f1ec4082-af5d-46ce-a7ca-88091e668a22"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.725830 4886 scope.go:117] "RemoveContainer" containerID="6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7" Feb 19 21:27:53 crc kubenswrapper[4886]: E0219 21:27:53.727791 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7\": container with ID starting with 6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7 not found: ID does not exist" containerID="6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.727862 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7"} err="failed to get container status \"6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7\": rpc error: code = NotFound desc = could not find container 
\"6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7\": container with ID starting with 6cf3c17d8e6edaf248c5e7a9f0a46cea2317a9087bde8f5dbc744debfbf4c7f7 not found: ID does not exist" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.727901 4886 scope.go:117] "RemoveContainer" containerID="169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566" Feb 19 21:27:53 crc kubenswrapper[4886]: E0219 21:27:53.728302 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566\": container with ID starting with 169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566 not found: ID does not exist" containerID="169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.728323 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566"} err="failed to get container status \"169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566\": rpc error: code = NotFound desc = could not find container \"169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566\": container with ID starting with 169100d07ea125113ae9e306e5b8e8dd80d7424de5a2e844aaf2cd8ac5d03566 not found: ID does not exist" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.736663 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1ec4082-af5d-46ce-a7ca-88091e668a22-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.736694 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") on node \"crc\" 
DevicePath \"\"" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.874815 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.892572 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.931963 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:27:53 crc kubenswrapper[4886]: E0219 21:27:53.932455 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="rabbitmq" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.932475 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="rabbitmq" Feb 19 21:27:53 crc kubenswrapper[4886]: E0219 21:27:53.932490 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="setup-container" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.932497 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="setup-container" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.932709 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" containerName="rabbitmq" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.934369 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 21:27:53 crc kubenswrapper[4886]: I0219 21:27:53.946564 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043190 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043279 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/867aa162-7de9-40d0-a840-3c2ff5acdb82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043329 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4sn\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-kube-api-access-7k4sn\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043431 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043455 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043486 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-config-data\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043803 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043931 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.043970 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.044150 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/867aa162-7de9-40d0-a840-3c2ff5acdb82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.044237 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.146848 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.146930 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.146956 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/867aa162-7de9-40d0-a840-3c2ff5acdb82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147056 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147156 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/867aa162-7de9-40d0-a840-3c2ff5acdb82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147269 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k4sn\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-kube-api-access-7k4sn\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147326 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc 
kubenswrapper[4886]: I0219 21:27:54.147343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.147367 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-config-data\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.148429 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.148502 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.149045 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-config-data\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.150166 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.150569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/867aa162-7de9-40d0-a840-3c2ff5acdb82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.151533 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.151568 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ba39f9798bee59ac311ac153ee53686d284935ab086ba701684d2f8fa8f640f1/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.155214 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/867aa162-7de9-40d0-a840-3c2ff5acdb82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.155304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/867aa162-7de9-40d0-a840-3c2ff5acdb82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") 
" pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.156079 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.157038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.167799 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k4sn\" (UniqueName: \"kubernetes.io/projected/867aa162-7de9-40d0-a840-3c2ff5acdb82-kube-api-access-7k4sn\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.234748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1daff07d-3864-4736-a71a-3bbc999db29d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1daff07d-3864-4736-a71a-3bbc999db29d\") pod \"rabbitmq-server-0\" (UID: \"867aa162-7de9-40d0-a840-3c2ff5acdb82\") " pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.261766 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.612711 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ec4082-af5d-46ce-a7ca-88091e668a22" path="/var/lib/kubelet/pods/f1ec4082-af5d-46ce-a7ca-88091e668a22/volumes" Feb 19 21:27:54 crc kubenswrapper[4886]: I0219 21:27:54.799507 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 21:27:55 crc kubenswrapper[4886]: I0219 21:27:55.542485 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"867aa162-7de9-40d0-a840-3c2ff5acdb82","Type":"ContainerStarted","Data":"9bc14abc85b17d4f3aa52b3bcf16874859a86dbc3fb4625bf2014562dda8fc5d"} Feb 19 21:27:55 crc kubenswrapper[4886]: I0219 21:27:55.601434 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:27:55 crc kubenswrapper[4886]: E0219 21:27:55.601873 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:27:57 crc kubenswrapper[4886]: I0219 21:27:57.579456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"867aa162-7de9-40d0-a840-3c2ff5acdb82","Type":"ContainerStarted","Data":"c64d9f29ec56137cf8fad1dfb2a1e23f1a938339d1b5aaa6265511e55066363d"} Feb 19 21:28:06 crc kubenswrapper[4886]: I0219 21:28:06.601803 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:28:06 crc kubenswrapper[4886]: E0219 21:28:06.602602 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:28:20 crc kubenswrapper[4886]: I0219 21:28:20.600807 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:28:20 crc kubenswrapper[4886]: E0219 21:28:20.601602 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:28:30 crc kubenswrapper[4886]: I0219 21:28:30.054178 4886 generic.go:334] "Generic (PLEG): container finished" podID="867aa162-7de9-40d0-a840-3c2ff5acdb82" containerID="c64d9f29ec56137cf8fad1dfb2a1e23f1a938339d1b5aaa6265511e55066363d" exitCode=0 Feb 19 21:28:30 crc kubenswrapper[4886]: I0219 21:28:30.054850 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"867aa162-7de9-40d0-a840-3c2ff5acdb82","Type":"ContainerDied","Data":"c64d9f29ec56137cf8fad1dfb2a1e23f1a938339d1b5aaa6265511e55066363d"} Feb 19 21:28:31 crc kubenswrapper[4886]: I0219 21:28:31.071319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"867aa162-7de9-40d0-a840-3c2ff5acdb82","Type":"ContainerStarted","Data":"e04af9917eb97cf4ac49aca092e105a0e9e3a86a38cc2d8c32c89568f41caa82"} Feb 19 21:28:31 crc kubenswrapper[4886]: 
I0219 21:28:31.071992 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 21:28:31 crc kubenswrapper[4886]: I0219 21:28:31.106676 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.106651948 podStartE2EDuration="38.106651948s" podCreationTimestamp="2026-02-19 21:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:28:31.093440504 +0000 UTC m=+1741.721283564" watchObservedRunningTime="2026-02-19 21:28:31.106651948 +0000 UTC m=+1741.734495008" Feb 19 21:28:34 crc kubenswrapper[4886]: I0219 21:28:34.602125 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:28:34 crc kubenswrapper[4886]: E0219 21:28:34.604250 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:28:44 crc kubenswrapper[4886]: I0219 21:28:44.268514 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 21:28:46 crc kubenswrapper[4886]: I0219 21:28:46.601734 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:28:46 crc kubenswrapper[4886]: E0219 21:28:46.602717 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:28:50 crc kubenswrapper[4886]: I0219 21:28:50.761149 4886 scope.go:117] "RemoveContainer" containerID="8a3ed795a13828e95eb0ba7f9d867dd93bc61ee346ce5109e0bfd811d164a195" Feb 19 21:28:50 crc kubenswrapper[4886]: I0219 21:28:50.830913 4886 scope.go:117] "RemoveContainer" containerID="7c0199a62cd01659bc2bab217e45abb50d8736caa1d532925e3809d6a52a8ddd" Feb 19 21:28:50 crc kubenswrapper[4886]: I0219 21:28:50.883610 4886 scope.go:117] "RemoveContainer" containerID="a9f867fcaf17c5bb8a7ca0c6d641c1c4862ea9d7e24f2b7115be271b02bcd19e" Feb 19 21:28:50 crc kubenswrapper[4886]: I0219 21:28:50.907590 4886 scope.go:117] "RemoveContainer" containerID="a1a8eeee27ae89c20c8975f497072d6aec4e64e57f41d860a49b5ec88d2d9057" Feb 19 21:28:59 crc kubenswrapper[4886]: I0219 21:28:59.602045 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:28:59 crc kubenswrapper[4886]: E0219 21:28:59.603498 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:29:10 crc kubenswrapper[4886]: I0219 21:29:10.609911 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:29:10 crc kubenswrapper[4886]: E0219 21:29:10.610756 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:29:14 crc kubenswrapper[4886]: I0219 21:29:14.058486 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cvm86"] Feb 19 21:29:14 crc kubenswrapper[4886]: I0219 21:29:14.070517 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cvm86"] Feb 19 21:29:14 crc kubenswrapper[4886]: I0219 21:29:14.629596 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06257e23-2243-4646-a2fb-95b947d5c466" path="/var/lib/kubelet/pods/06257e23-2243-4646-a2fb-95b947d5c466/volumes" Feb 19 21:29:15 crc kubenswrapper[4886]: I0219 21:29:15.038447 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e29c-account-create-update-2897c"] Feb 19 21:29:15 crc kubenswrapper[4886]: I0219 21:29:15.050801 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e29c-account-create-update-2897c"] Feb 19 21:29:16 crc kubenswrapper[4886]: I0219 21:29:16.647354 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240c4666-ec12-4498-9f53-dd95d7a33ed4" path="/var/lib/kubelet/pods/240c4666-ec12-4498-9f53-dd95d7a33ed4/volumes" Feb 19 21:29:21 crc kubenswrapper[4886]: I0219 21:29:21.035919 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d082-account-create-update-2h5pl"] Feb 19 21:29:21 crc kubenswrapper[4886]: I0219 21:29:21.048488 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d082-account-create-update-2h5pl"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.044091 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5lpl5"] Feb 19 21:29:22 
crc kubenswrapper[4886]: I0219 21:29:22.055421 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5lpl5"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.065409 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vmhgw"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.077662 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e0b5-account-create-update-whzj7"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.088544 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k78l4"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.099000 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e0b5-account-create-update-whzj7"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.109552 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-588e-account-create-update-tkjhl"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.121425 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vmhgw"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.136645 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k78l4"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.153301 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-588e-account-create-update-tkjhl"] Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.623482 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce" path="/var/lib/kubelet/pods/3c42c45f-c36c-45f6-93dc-8d8fa0ed69ce/volumes" Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.626663 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4409a578-5632-41e1-bcb2-015deecc0e1a" 
path="/var/lib/kubelet/pods/4409a578-5632-41e1-bcb2-015deecc0e1a/volumes" Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.630035 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ce6552-da24-4a92-9474-47b352bd969e" path="/var/lib/kubelet/pods/62ce6552-da24-4a92-9474-47b352bd969e/volumes" Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.632735 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f78fcbc-7dfd-45f5-8d6a-ce814efb3303" path="/var/lib/kubelet/pods/8f78fcbc-7dfd-45f5-8d6a-ce814efb3303/volumes" Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.636959 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adccc6e1-c1da-45d2-aea8-8a381b733fef" path="/var/lib/kubelet/pods/adccc6e1-c1da-45d2-aea8-8a381b733fef/volumes" Feb 19 21:29:22 crc kubenswrapper[4886]: I0219 21:29:22.640839 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e6646f-4bdd-478f-a451-a16e2bfc2c08" path="/var/lib/kubelet/pods/d1e6646f-4bdd-478f-a451-a16e2bfc2c08/volumes" Feb 19 21:29:25 crc kubenswrapper[4886]: I0219 21:29:25.602522 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:29:25 crc kubenswrapper[4886]: E0219 21:29:25.603591 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:29:31 crc kubenswrapper[4886]: I0219 21:29:31.917526 4886 generic.go:334] "Generic (PLEG): container finished" podID="15a12ee8-44ec-4c1d-9b3b-b78e49eea138" containerID="4af2358ad0ef35ed5ebe880c2f4693627cf640da7524bd7def0689b2117090f4" 
exitCode=0 Feb 19 21:29:31 crc kubenswrapper[4886]: I0219 21:29:31.917627 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" event={"ID":"15a12ee8-44ec-4c1d-9b3b-b78e49eea138","Type":"ContainerDied","Data":"4af2358ad0ef35ed5ebe880c2f4693627cf640da7524bd7def0689b2117090f4"} Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.448406 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.555497 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdgvr\" (UniqueName: \"kubernetes.io/projected/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-kube-api-access-kdgvr\") pod \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.555712 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-inventory\") pod \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.555811 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-ssh-key-openstack-edpm-ipam\") pod \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\" (UID: \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.556027 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-bootstrap-combined-ca-bundle\") pod \"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\" (UID: 
\"15a12ee8-44ec-4c1d-9b3b-b78e49eea138\") " Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.561245 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-kube-api-access-kdgvr" (OuterVolumeSpecName: "kube-api-access-kdgvr") pod "15a12ee8-44ec-4c1d-9b3b-b78e49eea138" (UID: "15a12ee8-44ec-4c1d-9b3b-b78e49eea138"). InnerVolumeSpecName "kube-api-access-kdgvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.566139 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "15a12ee8-44ec-4c1d-9b3b-b78e49eea138" (UID: "15a12ee8-44ec-4c1d-9b3b-b78e49eea138"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.611964 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15a12ee8-44ec-4c1d-9b3b-b78e49eea138" (UID: "15a12ee8-44ec-4c1d-9b3b-b78e49eea138"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.613756 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-inventory" (OuterVolumeSpecName: "inventory") pod "15a12ee8-44ec-4c1d-9b3b-b78e49eea138" (UID: "15a12ee8-44ec-4c1d-9b3b-b78e49eea138"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.660312 4886 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.660375 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdgvr\" (UniqueName: \"kubernetes.io/projected/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-kube-api-access-kdgvr\") on node \"crc\" DevicePath \"\"" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.660403 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.660423 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a12ee8-44ec-4c1d-9b3b-b78e49eea138-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.948062 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" event={"ID":"15a12ee8-44ec-4c1d-9b3b-b78e49eea138","Type":"ContainerDied","Data":"44d1136b8a545bcd9c8bb0faceb2ebff6f7e87e2d515bf745a8dbb085ff18256"} Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.948565 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d1136b8a545bcd9c8bb0faceb2ebff6f7e87e2d515bf745a8dbb085ff18256" Feb 19 21:29:33 crc kubenswrapper[4886]: I0219 21:29:33.948208 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-snc24" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.074689 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2"] Feb 19 21:29:34 crc kubenswrapper[4886]: E0219 21:29:34.075338 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a12ee8-44ec-4c1d-9b3b-b78e49eea138" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.075357 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a12ee8-44ec-4c1d-9b3b-b78e49eea138" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.075628 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a12ee8-44ec-4c1d-9b3b-b78e49eea138" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.076608 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.079121 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.079549 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.079651 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.091891 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2"] Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.101880 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.173835 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.173883 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 
21:29:34.174198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbhg\" (UniqueName: \"kubernetes.io/projected/23a8dddf-44f9-4f14-9886-1eab785efe58-kube-api-access-nxbhg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.277787 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbhg\" (UniqueName: \"kubernetes.io/projected/23a8dddf-44f9-4f14-9886-1eab785efe58-kube-api-access-nxbhg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.278002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.278033 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.286520 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.292149 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.297368 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxbhg\" (UniqueName: \"kubernetes.io/projected/23a8dddf-44f9-4f14-9886-1eab785efe58-kube-api-access-nxbhg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.406535 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.989904 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2"] Feb 19 21:29:34 crc kubenswrapper[4886]: I0219 21:29:34.990558 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:29:35 crc kubenswrapper[4886]: I0219 21:29:35.974635 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" event={"ID":"23a8dddf-44f9-4f14-9886-1eab785efe58","Type":"ContainerStarted","Data":"ddac24cd5e0e77aa0b9660d7589ead13bf4010527fda3e9ec71111aeed992436"} Feb 19 21:29:35 crc kubenswrapper[4886]: I0219 21:29:35.974969 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" event={"ID":"23a8dddf-44f9-4f14-9886-1eab785efe58","Type":"ContainerStarted","Data":"505fbcdfc33fa981116aa19ea9175818023c1de2d10e1e3a70beace44732f946"} Feb 19 21:29:36 crc kubenswrapper[4886]: I0219 21:29:36.035575 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" podStartSLOduration=1.615380697 podStartE2EDuration="2.035549625s" podCreationTimestamp="2026-02-19 21:29:34 +0000 UTC" firstStartedPulling="2026-02-19 21:29:34.990279198 +0000 UTC m=+1805.618122258" lastFinishedPulling="2026-02-19 21:29:35.410448136 +0000 UTC m=+1806.038291186" observedRunningTime="2026-02-19 21:29:35.992182518 +0000 UTC m=+1806.620025598" watchObservedRunningTime="2026-02-19 21:29:36.035549625 +0000 UTC m=+1806.663392685" Feb 19 21:29:39 crc kubenswrapper[4886]: I0219 21:29:39.608087 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:29:39 crc 
kubenswrapper[4886]: E0219 21:29:39.609333 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:29:43 crc kubenswrapper[4886]: I0219 21:29:43.052718 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c4ab-account-create-update-zxvnm"] Feb 19 21:29:43 crc kubenswrapper[4886]: I0219 21:29:43.065228 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c4ab-account-create-update-zxvnm"] Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.046332 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z8xjt"] Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.066238 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z8xjt"] Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.081567 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv"] Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.129188 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6w6vv"] Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.621957 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e585e30-61e9-46d4-9317-561c8e70c60b" path="/var/lib/kubelet/pods/2e585e30-61e9-46d4-9317-561c8e70c60b/volumes" Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.623748 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7774da6-bf0a-41f3-9ca6-26ca8c567053" 
path="/var/lib/kubelet/pods/b7774da6-bf0a-41f3-9ca6-26ca8c567053/volumes" Feb 19 21:29:44 crc kubenswrapper[4886]: I0219 21:29:44.625292 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5022181-22f3-477d-83c1-94270a8c9da3" path="/var/lib/kubelet/pods/e5022181-22f3-477d-83c1-94270a8c9da3/volumes" Feb 19 21:29:50 crc kubenswrapper[4886]: I0219 21:29:50.613042 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.023979 4886 scope.go:117] "RemoveContainer" containerID="034f95ca8c9086225c1e862d6f52a18213ac7a971623c65163c9a41a863fcbbd" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.048664 4886 scope.go:117] "RemoveContainer" containerID="2bab4fa65f6ff951fc44be98722e578c8fb52c4fce42310c55441a42e14299d5" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.111074 4886 scope.go:117] "RemoveContainer" containerID="fce7bee970b236d413937a2161a01b78eeb3004e421cc546a5102e1dd9969645" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.168171 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"35564522d2e7778839438b05f01b84ea628562673e5c8941499cfff7719ef457"} Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.188141 4886 scope.go:117] "RemoveContainer" containerID="94e84cdc1a550877f49efa7d355db2c29b46519522ff0e0605a24c1be9aca8f9" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.263335 4886 scope.go:117] "RemoveContainer" containerID="720d6621ef921e61233a34373a9f281e31ebed1d3db64d1d798a17262a306598" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.310820 4886 scope.go:117] "RemoveContainer" containerID="64c49f35dfdcf1596f73b6f7eedf7f760735004db15cc5cf577a124ec72c5a7f" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.345734 4886 scope.go:117] 
"RemoveContainer" containerID="b114aa8efb6597102aad65bd2a9a09282ded2a312a2b4223257dfc74eaa84420" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.365817 4886 scope.go:117] "RemoveContainer" containerID="e3376baf903f20790cba4588626b53cdaa11fd73a3818992364894e2cff04c0e" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.396193 4886 scope.go:117] "RemoveContainer" containerID="234ca7b7dd45d32287887986e280c069c3ffc9f932923853052a84dc2ffc4649" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.420490 4886 scope.go:117] "RemoveContainer" containerID="f436bb6b015adf9ee0f489cbd0a473b2e34e58fd2b28b67254f4ce23cc7d3eb0" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.450471 4886 scope.go:117] "RemoveContainer" containerID="62e6f919c061cea4a3c4da82ab72a38475ac6e9bc5f5bf9970aa433729e5a49c" Feb 19 21:29:51 crc kubenswrapper[4886]: I0219 21:29:51.471618 4886 scope.go:117] "RemoveContainer" containerID="d4309ae7e0febe123227ddf9989775052dd4e03144fb3b9cff3f22d8b30ef772" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.154404 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l"] Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.156939 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.161522 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.161675 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.181793 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l"] Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.191855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2484eb25-a176-45d0-aa84-91ea87297c90-secret-volume\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.192008 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2484eb25-a176-45d0-aa84-91ea87297c90-config-volume\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.192154 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgl7\" (UniqueName: \"kubernetes.io/projected/2484eb25-a176-45d0-aa84-91ea87297c90-kube-api-access-rrgl7\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.294193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrgl7\" (UniqueName: \"kubernetes.io/projected/2484eb25-a176-45d0-aa84-91ea87297c90-kube-api-access-rrgl7\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.294323 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2484eb25-a176-45d0-aa84-91ea87297c90-secret-volume\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.294448 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2484eb25-a176-45d0-aa84-91ea87297c90-config-volume\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.295373 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2484eb25-a176-45d0-aa84-91ea87297c90-config-volume\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.301938 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2484eb25-a176-45d0-aa84-91ea87297c90-secret-volume\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.311149 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrgl7\" (UniqueName: \"kubernetes.io/projected/2484eb25-a176-45d0-aa84-91ea87297c90-kube-api-access-rrgl7\") pod \"collect-profiles-29525610-r8f7l\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:00 crc kubenswrapper[4886]: I0219 21:30:00.491980 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:01 crc kubenswrapper[4886]: I0219 21:30:01.009852 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l"] Feb 19 21:30:01 crc kubenswrapper[4886]: I0219 21:30:01.289928 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" event={"ID":"2484eb25-a176-45d0-aa84-91ea87297c90","Type":"ContainerStarted","Data":"9d4c590245dcb5d08c3997a6797ce4ef4dd9814d0f0f6320b7479d4be99c852f"} Feb 19 21:30:01 crc kubenswrapper[4886]: I0219 21:30:01.289979 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" event={"ID":"2484eb25-a176-45d0-aa84-91ea87297c90","Type":"ContainerStarted","Data":"e4f5a1c2a15542c6e48ad6ff38d28a7f94fc24cec13a9a9dcae05d04e60c3e2a"} Feb 19 21:30:01 crc kubenswrapper[4886]: I0219 21:30:01.306205 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" 
podStartSLOduration=1.306184767 podStartE2EDuration="1.306184767s" podCreationTimestamp="2026-02-19 21:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:30:01.303236715 +0000 UTC m=+1831.931079765" watchObservedRunningTime="2026-02-19 21:30:01.306184767 +0000 UTC m=+1831.934027817" Feb 19 21:30:02 crc kubenswrapper[4886]: I0219 21:30:02.306630 4886 generic.go:334] "Generic (PLEG): container finished" podID="2484eb25-a176-45d0-aa84-91ea87297c90" containerID="9d4c590245dcb5d08c3997a6797ce4ef4dd9814d0f0f6320b7479d4be99c852f" exitCode=0 Feb 19 21:30:02 crc kubenswrapper[4886]: I0219 21:30:02.306727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" event={"ID":"2484eb25-a176-45d0-aa84-91ea87297c90","Type":"ContainerDied","Data":"9d4c590245dcb5d08c3997a6797ce4ef4dd9814d0f0f6320b7479d4be99c852f"} Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.803177 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.883477 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrgl7\" (UniqueName: \"kubernetes.io/projected/2484eb25-a176-45d0-aa84-91ea87297c90-kube-api-access-rrgl7\") pod \"2484eb25-a176-45d0-aa84-91ea87297c90\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.883748 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2484eb25-a176-45d0-aa84-91ea87297c90-config-volume\") pod \"2484eb25-a176-45d0-aa84-91ea87297c90\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.883881 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2484eb25-a176-45d0-aa84-91ea87297c90-secret-volume\") pod \"2484eb25-a176-45d0-aa84-91ea87297c90\" (UID: \"2484eb25-a176-45d0-aa84-91ea87297c90\") " Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.884673 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2484eb25-a176-45d0-aa84-91ea87297c90-config-volume" (OuterVolumeSpecName: "config-volume") pod "2484eb25-a176-45d0-aa84-91ea87297c90" (UID: "2484eb25-a176-45d0-aa84-91ea87297c90"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.884896 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2484eb25-a176-45d0-aa84-91ea87297c90-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.893584 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2484eb25-a176-45d0-aa84-91ea87297c90-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2484eb25-a176-45d0-aa84-91ea87297c90" (UID: "2484eb25-a176-45d0-aa84-91ea87297c90"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.893643 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2484eb25-a176-45d0-aa84-91ea87297c90-kube-api-access-rrgl7" (OuterVolumeSpecName: "kube-api-access-rrgl7") pod "2484eb25-a176-45d0-aa84-91ea87297c90" (UID: "2484eb25-a176-45d0-aa84-91ea87297c90"). InnerVolumeSpecName "kube-api-access-rrgl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.987756 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrgl7\" (UniqueName: \"kubernetes.io/projected/2484eb25-a176-45d0-aa84-91ea87297c90-kube-api-access-rrgl7\") on node \"crc\" DevicePath \"\"" Feb 19 21:30:03 crc kubenswrapper[4886]: I0219 21:30:03.987797 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2484eb25-a176-45d0-aa84-91ea87297c90-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:30:04 crc kubenswrapper[4886]: I0219 21:30:04.326439 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" event={"ID":"2484eb25-a176-45d0-aa84-91ea87297c90","Type":"ContainerDied","Data":"e4f5a1c2a15542c6e48ad6ff38d28a7f94fc24cec13a9a9dcae05d04e60c3e2a"} Feb 19 21:30:04 crc kubenswrapper[4886]: I0219 21:30:04.326487 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f5a1c2a15542c6e48ad6ff38d28a7f94fc24cec13a9a9dcae05d04e60c3e2a" Feb 19 21:30:04 crc kubenswrapper[4886]: I0219 21:30:04.326507 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l" Feb 19 21:30:05 crc kubenswrapper[4886]: I0219 21:30:05.074227 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cq5sp"] Feb 19 21:30:05 crc kubenswrapper[4886]: I0219 21:30:05.088491 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cq5sp"] Feb 19 21:30:05 crc kubenswrapper[4886]: I0219 21:30:05.100885 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9d9f-account-create-update-ld55p"] Feb 19 21:30:05 crc kubenswrapper[4886]: I0219 21:30:05.110763 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9d9f-account-create-update-ld55p"] Feb 19 21:30:06 crc kubenswrapper[4886]: I0219 21:30:06.626195 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248cf37f-1bcc-4904-bba2-8d0398f694df" path="/var/lib/kubelet/pods/248cf37f-1bcc-4904-bba2-8d0398f694df/volumes" Feb 19 21:30:06 crc kubenswrapper[4886]: I0219 21:30:06.629536 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6850c0dd-a8d1-4fa7-83d5-e224be6efcd4" path="/var/lib/kubelet/pods/6850c0dd-a8d1-4fa7-83d5-e224be6efcd4/volumes" Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.049055 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wfngr"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.063368 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-ngwj9"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.074405 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2b4a-account-create-update-zqq2h"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.085519 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5058-account-create-update-zsfmc"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.104543 4886 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wfngr"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.120483 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-js4vm"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.131547 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5058-account-create-update-zsfmc"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.143748 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-ngwj9"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.154079 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-98fw7"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.163592 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2b4a-account-create-update-zqq2h"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.173875 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-js4vm"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.187141 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-98fw7"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.195134 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c1e3-account-create-update-s6vdn"] Feb 19 21:30:09 crc kubenswrapper[4886]: I0219 21:30:09.205048 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c1e3-account-create-update-s6vdn"] Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.623076 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22179eed-5249-458a-b35d-7f934c393c87" path="/var/lib/kubelet/pods/22179eed-5249-458a-b35d-7f934c393c87/volumes" Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.625532 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4a2f507c-d8b5-46b9-82f7-dd0e5be787c4" path="/var/lib/kubelet/pods/4a2f507c-d8b5-46b9-82f7-dd0e5be787c4/volumes" Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.628234 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16" path="/var/lib/kubelet/pods/4f9a3ab0-8af9-4b00-bd2f-f9eb7e218a16/volumes" Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.630537 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a" path="/var/lib/kubelet/pods/8fbcc22a-93ee-44c9-b600-f02eeb8c8e4a/volumes" Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.632646 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9741740a-a65c-49f5-8cdb-156b0d3037ec" path="/var/lib/kubelet/pods/9741740a-a65c-49f5-8cdb-156b0d3037ec/volumes" Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.634156 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb8222a-a662-49c8-89f5-7fe1f193adfd" path="/var/lib/kubelet/pods/afb8222a-a662-49c8-89f5-7fe1f193adfd/volumes" Feb 19 21:30:10 crc kubenswrapper[4886]: I0219 21:30:10.637055 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f652b0b8-6eee-4ebb-b0ad-e22de89080a6" path="/var/lib/kubelet/pods/f652b0b8-6eee-4ebb-b0ad-e22de89080a6/volumes" Feb 19 21:30:16 crc kubenswrapper[4886]: I0219 21:30:16.053304 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-twxkg"] Feb 19 21:30:16 crc kubenswrapper[4886]: I0219 21:30:16.072871 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-twxkg"] Feb 19 21:30:16 crc kubenswrapper[4886]: I0219 21:30:16.612626 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54149206-2ab9-4e9e-be08-86d91ea986f9" path="/var/lib/kubelet/pods/54149206-2ab9-4e9e-be08-86d91ea986f9/volumes" Feb 19 21:30:46 crc kubenswrapper[4886]: I0219 21:30:46.062463 
4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jjrqx"] Feb 19 21:30:46 crc kubenswrapper[4886]: I0219 21:30:46.075653 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jjrqx"] Feb 19 21:30:46 crc kubenswrapper[4886]: I0219 21:30:46.628017 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f14cfdd-608a-42ab-9195-b9773729d874" path="/var/lib/kubelet/pods/9f14cfdd-608a-42ab-9195-b9773729d874/volumes" Feb 19 21:30:51 crc kubenswrapper[4886]: I0219 21:30:51.783919 4886 scope.go:117] "RemoveContainer" containerID="ce48641e412b2ea34f23c4d2495d0621fa6f8ba7d771b867cbd84b8c6785402e" Feb 19 21:30:51 crc kubenswrapper[4886]: I0219 21:30:51.847239 4886 scope.go:117] "RemoveContainer" containerID="179ed607b04840d3418f1ac5c0089784c6fb8a269c750b911c32e4d6d0169df3" Feb 19 21:30:51 crc kubenswrapper[4886]: I0219 21:30:51.886650 4886 scope.go:117] "RemoveContainer" containerID="df09e54b41593dbb719f60ec1e93c8fafaf7787afc72b1f8f5aefc96a8560505" Feb 19 21:30:51 crc kubenswrapper[4886]: I0219 21:30:51.938236 4886 scope.go:117] "RemoveContainer" containerID="cfc82c668ef18d4fec54685f8f41f5fdb8c6748e1b51de731079d7d3ccecd280" Feb 19 21:30:51 crc kubenswrapper[4886]: I0219 21:30:51.987329 4886 scope.go:117] "RemoveContainer" containerID="31d57ba50e3584bbd09e6d91a9005f33f9a8b7dcd721a458766e37bf3779e6ab" Feb 19 21:30:52 crc kubenswrapper[4886]: I0219 21:30:52.051581 4886 scope.go:117] "RemoveContainer" containerID="f0574f2c0097bc69f4a4c95ddc2bb03e213f0fed2e56eaf98cfa2c71a6d7d97f" Feb 19 21:30:52 crc kubenswrapper[4886]: I0219 21:30:52.113103 4886 scope.go:117] "RemoveContainer" containerID="19f1d2fd61c1ad3bae5bd5494b56da7c2ba78579e811c788face7bf47acd332c" Feb 19 21:30:52 crc kubenswrapper[4886]: I0219 21:30:52.143821 4886 scope.go:117] "RemoveContainer" containerID="06a3316e3dd4608d0a992b6f55507d230086191d7f9ea80b7e5b294d6bd9e786" Feb 19 21:30:52 crc kubenswrapper[4886]: I0219 21:30:52.163735 
4886 scope.go:117] "RemoveContainer" containerID="096c3dd21cce6e083128ca288bbf85534c699ceac1e9e3c7bf351bf4ee4f6fa7" Feb 19 21:30:52 crc kubenswrapper[4886]: I0219 21:30:52.183341 4886 scope.go:117] "RemoveContainer" containerID="91dfd157308c118eb74b6d09d731bed9c28d03ea9fb702843e79541cda906dda" Feb 19 21:30:52 crc kubenswrapper[4886]: I0219 21:30:52.208349 4886 scope.go:117] "RemoveContainer" containerID="719c992027f342d6e8a7d8c3155c8387e91f74d02a6dac127beb416cbd0e07da" Feb 19 21:30:59 crc kubenswrapper[4886]: I0219 21:30:59.045226 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8fg8c"] Feb 19 21:30:59 crc kubenswrapper[4886]: I0219 21:30:59.065389 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8fg8c"] Feb 19 21:31:00 crc kubenswrapper[4886]: I0219 21:31:00.622535 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfadf95-ec64-4d3a-856f-22c0a4b65d54" path="/var/lib/kubelet/pods/2cfadf95-ec64-4d3a-856f-22c0a4b65d54/volumes" Feb 19 21:31:05 crc kubenswrapper[4886]: I0219 21:31:05.058342 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4rq5v"] Feb 19 21:31:05 crc kubenswrapper[4886]: I0219 21:31:05.079774 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4rq5v"] Feb 19 21:31:06 crc kubenswrapper[4886]: I0219 21:31:06.633934 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7" path="/var/lib/kubelet/pods/2aaaaad3-3f26-4db9-9bc5-c02d2ff136c7/volumes" Feb 19 21:31:14 crc kubenswrapper[4886]: I0219 21:31:14.050599 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xmr6n"] Feb 19 21:31:14 crc kubenswrapper[4886]: I0219 21:31:14.068235 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xmr6n"] Feb 19 21:31:14 crc kubenswrapper[4886]: I0219 21:31:14.626010 4886 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d1e69b-5951-43d6-a54b-c73956bb3356" path="/var/lib/kubelet/pods/83d1e69b-5951-43d6-a54b-c73956bb3356/volumes" Feb 19 21:31:16 crc kubenswrapper[4886]: I0219 21:31:16.044129 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rlz62"] Feb 19 21:31:16 crc kubenswrapper[4886]: I0219 21:31:16.057866 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rlz62"] Feb 19 21:31:16 crc kubenswrapper[4886]: I0219 21:31:16.625998 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956c70ec-60b5-4909-b686-66971581b168" path="/var/lib/kubelet/pods/956c70ec-60b5-4909-b686-66971581b168/volumes" Feb 19 21:31:28 crc kubenswrapper[4886]: I0219 21:31:28.464532 4886 generic.go:334] "Generic (PLEG): container finished" podID="23a8dddf-44f9-4f14-9886-1eab785efe58" containerID="ddac24cd5e0e77aa0b9660d7589ead13bf4010527fda3e9ec71111aeed992436" exitCode=0 Feb 19 21:31:28 crc kubenswrapper[4886]: I0219 21:31:28.464752 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" event={"ID":"23a8dddf-44f9-4f14-9886-1eab785efe58","Type":"ContainerDied","Data":"ddac24cd5e0e77aa0b9660d7589ead13bf4010527fda3e9ec71111aeed992436"} Feb 19 21:31:29 crc kubenswrapper[4886]: I0219 21:31:29.980694 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.160443 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbhg\" (UniqueName: \"kubernetes.io/projected/23a8dddf-44f9-4f14-9886-1eab785efe58-kube-api-access-nxbhg\") pod \"23a8dddf-44f9-4f14-9886-1eab785efe58\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.161239 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-inventory\") pod \"23a8dddf-44f9-4f14-9886-1eab785efe58\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.161999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-ssh-key-openstack-edpm-ipam\") pod \"23a8dddf-44f9-4f14-9886-1eab785efe58\" (UID: \"23a8dddf-44f9-4f14-9886-1eab785efe58\") " Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.167377 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a8dddf-44f9-4f14-9886-1eab785efe58-kube-api-access-nxbhg" (OuterVolumeSpecName: "kube-api-access-nxbhg") pod "23a8dddf-44f9-4f14-9886-1eab785efe58" (UID: "23a8dddf-44f9-4f14-9886-1eab785efe58"). InnerVolumeSpecName "kube-api-access-nxbhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.208370 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-inventory" (OuterVolumeSpecName: "inventory") pod "23a8dddf-44f9-4f14-9886-1eab785efe58" (UID: "23a8dddf-44f9-4f14-9886-1eab785efe58"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.212006 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23a8dddf-44f9-4f14-9886-1eab785efe58" (UID: "23a8dddf-44f9-4f14-9886-1eab785efe58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.266415 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxbhg\" (UniqueName: \"kubernetes.io/projected/23a8dddf-44f9-4f14-9886-1eab785efe58-kube-api-access-nxbhg\") on node \"crc\" DevicePath \"\"" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.266462 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.266484 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a8dddf-44f9-4f14-9886-1eab785efe58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.494443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" event={"ID":"23a8dddf-44f9-4f14-9886-1eab785efe58","Type":"ContainerDied","Data":"505fbcdfc33fa981116aa19ea9175818023c1de2d10e1e3a70beace44732f946"} Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.494495 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="505fbcdfc33fa981116aa19ea9175818023c1de2d10e1e3a70beace44732f946" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 
21:31:30.494543 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpjn2" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.651821 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w"] Feb 19 21:31:30 crc kubenswrapper[4886]: E0219 21:31:30.652723 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2484eb25-a176-45d0-aa84-91ea87297c90" containerName="collect-profiles" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.652742 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2484eb25-a176-45d0-aa84-91ea87297c90" containerName="collect-profiles" Feb 19 21:31:30 crc kubenswrapper[4886]: E0219 21:31:30.652767 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a8dddf-44f9-4f14-9886-1eab785efe58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.652776 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a8dddf-44f9-4f14-9886-1eab785efe58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.652965 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a8dddf-44f9-4f14-9886-1eab785efe58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.652989 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2484eb25-a176-45d0-aa84-91ea87297c90" containerName="collect-profiles" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.654131 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w"] Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.654211 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.657021 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.657312 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.657385 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.658828 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.782200 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5jm\" (UniqueName: \"kubernetes.io/projected/00050abd-5de4-402f-957b-6546ffa044f4-kube-api-access-6p5jm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.782372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.782530 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.884900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.885135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.885331 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p5jm\" (UniqueName: \"kubernetes.io/projected/00050abd-5de4-402f-957b-6546ffa044f4-kube-api-access-6p5jm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.892607 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.893904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.909173 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p5jm\" (UniqueName: \"kubernetes.io/projected/00050abd-5de4-402f-957b-6546ffa044f4-kube-api-access-6p5jm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:30 crc kubenswrapper[4886]: I0219 21:31:30.979893 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:31:31 crc kubenswrapper[4886]: I0219 21:31:31.621758 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w"] Feb 19 21:31:32 crc kubenswrapper[4886]: I0219 21:31:32.523037 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" event={"ID":"00050abd-5de4-402f-957b-6546ffa044f4","Type":"ContainerStarted","Data":"ce7bf7ad6569e657e77a72ab0dd55fd8295ca74a69ea1e131dedc19bbe843994"} Feb 19 21:31:32 crc kubenswrapper[4886]: I0219 21:31:32.523515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" event={"ID":"00050abd-5de4-402f-957b-6546ffa044f4","Type":"ContainerStarted","Data":"8ff4b6a5a88f6ee7050d0785d95559df147fbb62433b4c1958de3a5308e63fe3"} Feb 19 21:31:32 crc kubenswrapper[4886]: I0219 21:31:32.554029 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" podStartSLOduration=2.044109701 podStartE2EDuration="2.554002564s" podCreationTimestamp="2026-02-19 21:31:30 +0000 UTC" firstStartedPulling="2026-02-19 21:31:31.634808718 +0000 UTC m=+1922.262651788" lastFinishedPulling="2026-02-19 21:31:32.144701561 +0000 UTC m=+1922.772544651" observedRunningTime="2026-02-19 21:31:32.543476637 +0000 UTC m=+1923.171319697" watchObservedRunningTime="2026-02-19 21:31:32.554002564 +0000 UTC m=+1923.181845654" Feb 19 21:31:52 crc kubenswrapper[4886]: I0219 21:31:52.437019 4886 scope.go:117] "RemoveContainer" containerID="b8a380af2e5ea1588188c064c9d3c87c613dad353837f23ae5e1330043ae6da7" Feb 19 21:31:52 crc kubenswrapper[4886]: I0219 21:31:52.478731 4886 scope.go:117] "RemoveContainer" 
containerID="88a5a03c714ace1b80a0c8509ab70d7db48e0352d2fe33a0e4bc9c01d5881333" Feb 19 21:31:52 crc kubenswrapper[4886]: I0219 21:31:52.555655 4886 scope.go:117] "RemoveContainer" containerID="dac5b9d178df92c1d37ce46f555f44e5975dba4d9bfdfb41e8612fece6ecfe80" Feb 19 21:31:52 crc kubenswrapper[4886]: I0219 21:31:52.630197 4886 scope.go:117] "RemoveContainer" containerID="af9d7a7e495ccb7c28de240a71d66718e87158bb098489e6747b5fed3c18cc1f" Feb 19 21:32:18 crc kubenswrapper[4886]: I0219 21:32:18.324403 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:32:18 crc kubenswrapper[4886]: I0219 21:32:18.325042 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:32:22 crc kubenswrapper[4886]: I0219 21:32:22.051664 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-m9xgr"] Feb 19 21:32:22 crc kubenswrapper[4886]: I0219 21:32:22.063256 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cd2f-account-create-update-hcj7k"] Feb 19 21:32:22 crc kubenswrapper[4886]: I0219 21:32:22.075711 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cd2f-account-create-update-hcj7k"] Feb 19 21:32:22 crc kubenswrapper[4886]: I0219 21:32:22.084477 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-m9xgr"] Feb 19 21:32:22 crc kubenswrapper[4886]: I0219 21:32:22.616035 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e6f6e153-c56c-4b3f-8c0c-457f163ac959" path="/var/lib/kubelet/pods/e6f6e153-c56c-4b3f-8c0c-457f163ac959/volumes" Feb 19 21:32:22 crc kubenswrapper[4886]: I0219 21:32:22.617599 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb05e42-f78a-402a-9ea7-50fd859e9b29" path="/var/lib/kubelet/pods/ecb05e42-f78a-402a-9ea7-50fd859e9b29/volumes" Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.041463 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-x9bqh"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.051786 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-001b-account-create-update-56t7m"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.063458 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-v4wd4"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.075245 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c948-account-create-update-hjlvn"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.084958 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-x9bqh"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.094609 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-v4wd4"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.104814 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-001b-account-create-update-56t7m"] Feb 19 21:32:23 crc kubenswrapper[4886]: I0219 21:32:23.114936 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c948-account-create-update-hjlvn"] Feb 19 21:32:24 crc kubenswrapper[4886]: I0219 21:32:24.614105 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a70275-beb0-4c27-8e45-b8a8143c34a2" path="/var/lib/kubelet/pods/11a70275-beb0-4c27-8e45-b8a8143c34a2/volumes" Feb 19 21:32:24 crc 
kubenswrapper[4886]: I0219 21:32:24.615163 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23217e5b-f7df-460d-abd9-ada19eeb839a" path="/var/lib/kubelet/pods/23217e5b-f7df-460d-abd9-ada19eeb839a/volumes" Feb 19 21:32:24 crc kubenswrapper[4886]: I0219 21:32:24.616042 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ceffae-e399-457a-80fe-152a1a143641" path="/var/lib/kubelet/pods/46ceffae-e399-457a-80fe-152a1a143641/volumes" Feb 19 21:32:24 crc kubenswrapper[4886]: I0219 21:32:24.616834 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf75e0d9-f5a5-47ab-af09-0d0321b1cacc" path="/var/lib/kubelet/pods/cf75e0d9-f5a5-47ab-af09-0d0321b1cacc/volumes" Feb 19 21:32:43 crc kubenswrapper[4886]: I0219 21:32:43.460722 4886 generic.go:334] "Generic (PLEG): container finished" podID="00050abd-5de4-402f-957b-6546ffa044f4" containerID="ce7bf7ad6569e657e77a72ab0dd55fd8295ca74a69ea1e131dedc19bbe843994" exitCode=0 Feb 19 21:32:43 crc kubenswrapper[4886]: I0219 21:32:43.460780 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" event={"ID":"00050abd-5de4-402f-957b-6546ffa044f4","Type":"ContainerDied","Data":"ce7bf7ad6569e657e77a72ab0dd55fd8295ca74a69ea1e131dedc19bbe843994"} Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.082780 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.151289 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-ssh-key-openstack-edpm-ipam\") pod \"00050abd-5de4-402f-957b-6546ffa044f4\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.151477 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-inventory\") pod \"00050abd-5de4-402f-957b-6546ffa044f4\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.151565 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p5jm\" (UniqueName: \"kubernetes.io/projected/00050abd-5de4-402f-957b-6546ffa044f4-kube-api-access-6p5jm\") pod \"00050abd-5de4-402f-957b-6546ffa044f4\" (UID: \"00050abd-5de4-402f-957b-6546ffa044f4\") " Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.159948 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00050abd-5de4-402f-957b-6546ffa044f4-kube-api-access-6p5jm" (OuterVolumeSpecName: "kube-api-access-6p5jm") pod "00050abd-5de4-402f-957b-6546ffa044f4" (UID: "00050abd-5de4-402f-957b-6546ffa044f4"). InnerVolumeSpecName "kube-api-access-6p5jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.205774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-inventory" (OuterVolumeSpecName: "inventory") pod "00050abd-5de4-402f-957b-6546ffa044f4" (UID: "00050abd-5de4-402f-957b-6546ffa044f4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.210626 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00050abd-5de4-402f-957b-6546ffa044f4" (UID: "00050abd-5de4-402f-957b-6546ffa044f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.254506 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.254558 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00050abd-5de4-402f-957b-6546ffa044f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.254578 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p5jm\" (UniqueName: \"kubernetes.io/projected/00050abd-5de4-402f-957b-6546ffa044f4-kube-api-access-6p5jm\") on node \"crc\" DevicePath \"\"" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.483673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" event={"ID":"00050abd-5de4-402f-957b-6546ffa044f4","Type":"ContainerDied","Data":"8ff4b6a5a88f6ee7050d0785d95559df147fbb62433b4c1958de3a5308e63fe3"} Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.484059 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff4b6a5a88f6ee7050d0785d95559df147fbb62433b4c1958de3a5308e63fe3" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 
21:32:45.483748 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z6r4w" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.618428 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq"] Feb 19 21:32:45 crc kubenswrapper[4886]: E0219 21:32:45.618965 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00050abd-5de4-402f-957b-6546ffa044f4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.618981 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="00050abd-5de4-402f-957b-6546ffa044f4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.619233 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="00050abd-5de4-402f-957b-6546ffa044f4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.620093 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.624306 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.624555 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.624703 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.624847 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.647134 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq"] Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.663312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.663522 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xxt\" (UniqueName: \"kubernetes.io/projected/d628c755-5347-4aaf-b233-47e28ccf1138-kube-api-access-p2xxt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 
21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.663557 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.765930 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.766044 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xxt\" (UniqueName: \"kubernetes.io/projected/d628c755-5347-4aaf-b233-47e28ccf1138-kube-api-access-p2xxt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.766080 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.770172 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.771032 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.784157 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xxt\" (UniqueName: \"kubernetes.io/projected/d628c755-5347-4aaf-b233-47e28ccf1138-kube-api-access-p2xxt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:45 crc kubenswrapper[4886]: I0219 21:32:45.952308 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:46 crc kubenswrapper[4886]: I0219 21:32:46.622158 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq"] Feb 19 21:32:47 crc kubenswrapper[4886]: I0219 21:32:47.508840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" event={"ID":"d628c755-5347-4aaf-b233-47e28ccf1138","Type":"ContainerStarted","Data":"fa7c8c3dd7a639aec32b53ea48a641e9165d1c5947c0d8194d8a2dbdabff7f49"} Feb 19 21:32:47 crc kubenswrapper[4886]: I0219 21:32:47.509291 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" event={"ID":"d628c755-5347-4aaf-b233-47e28ccf1138","Type":"ContainerStarted","Data":"e08754c62bf4a81f22fe485b196ac3dbca1de8a2c30c0e75ee101f60ed3f2d16"} Feb 19 21:32:47 crc kubenswrapper[4886]: I0219 21:32:47.527672 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" podStartSLOduration=2.069130814 podStartE2EDuration="2.527657906s" podCreationTimestamp="2026-02-19 21:32:45 +0000 UTC" firstStartedPulling="2026-02-19 21:32:46.617727916 +0000 UTC m=+1997.245570966" lastFinishedPulling="2026-02-19 21:32:47.076255008 +0000 UTC m=+1997.704098058" observedRunningTime="2026-02-19 21:32:47.524156251 +0000 UTC m=+1998.151999301" watchObservedRunningTime="2026-02-19 21:32:47.527657906 +0000 UTC m=+1998.155500956" Feb 19 21:32:48 crc kubenswrapper[4886]: I0219 21:32:48.324563 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
19 21:32:48 crc kubenswrapper[4886]: I0219 21:32:48.324878 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:32:52 crc kubenswrapper[4886]: I0219 21:32:52.791541 4886 scope.go:117] "RemoveContainer" containerID="08cfb6081d81afa39dd546464d2c4bbb5cdb2860da1857f50ad0174c862880de" Feb 19 21:32:52 crc kubenswrapper[4886]: I0219 21:32:52.958092 4886 scope.go:117] "RemoveContainer" containerID="28427be5b5061da3c3044a6d62cd6d591a82f89ddabc6cddbb075469c3b930b9" Feb 19 21:32:53 crc kubenswrapper[4886]: I0219 21:32:53.027320 4886 scope.go:117] "RemoveContainer" containerID="f2e2c96fe32633879434267721c95e6e9e266c966ac8aaf6af01d57ba935d6f4" Feb 19 21:32:53 crc kubenswrapper[4886]: I0219 21:32:53.062905 4886 scope.go:117] "RemoveContainer" containerID="99ce8f3f4f40db4ba0fbe5e35bd58722d8c97e35dab2e5e129897bceea565cc1" Feb 19 21:32:53 crc kubenswrapper[4886]: I0219 21:32:53.120830 4886 scope.go:117] "RemoveContainer" containerID="1cbcd4b8bd1b338c178e47c76cbf8f4f69c2fe4cfca926a4e8a5e7b3d2882c7d" Feb 19 21:32:53 crc kubenswrapper[4886]: I0219 21:32:53.212622 4886 scope.go:117] "RemoveContainer" containerID="18a62e68cec3f9cb438b5d66671e334c17d127a4d18f2920820b41c6370d8cdb" Feb 19 21:32:53 crc kubenswrapper[4886]: I0219 21:32:53.592371 4886 generic.go:334] "Generic (PLEG): container finished" podID="d628c755-5347-4aaf-b233-47e28ccf1138" containerID="fa7c8c3dd7a639aec32b53ea48a641e9165d1c5947c0d8194d8a2dbdabff7f49" exitCode=0 Feb 19 21:32:53 crc kubenswrapper[4886]: I0219 21:32:53.592496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" 
event={"ID":"d628c755-5347-4aaf-b233-47e28ccf1138","Type":"ContainerDied","Data":"fa7c8c3dd7a639aec32b53ea48a641e9165d1c5947c0d8194d8a2dbdabff7f49"} Feb 19 21:32:54 crc kubenswrapper[4886]: I0219 21:32:54.056997 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bp48t"] Feb 19 21:32:54 crc kubenswrapper[4886]: I0219 21:32:54.068535 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bp48t"] Feb 19 21:32:54 crc kubenswrapper[4886]: I0219 21:32:54.639777 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e60f29c-ef3b-4733-a6d8-92cd74e10eac" path="/var/lib/kubelet/pods/5e60f29c-ef3b-4733-a6d8-92cd74e10eac/volumes" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.184483 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.231050 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2xxt\" (UniqueName: \"kubernetes.io/projected/d628c755-5347-4aaf-b233-47e28ccf1138-kube-api-access-p2xxt\") pod \"d628c755-5347-4aaf-b233-47e28ccf1138\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.231212 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-inventory\") pod \"d628c755-5347-4aaf-b233-47e28ccf1138\" (UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.231444 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-ssh-key-openstack-edpm-ipam\") pod \"d628c755-5347-4aaf-b233-47e28ccf1138\" 
(UID: \"d628c755-5347-4aaf-b233-47e28ccf1138\") " Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.244473 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d628c755-5347-4aaf-b233-47e28ccf1138-kube-api-access-p2xxt" (OuterVolumeSpecName: "kube-api-access-p2xxt") pod "d628c755-5347-4aaf-b233-47e28ccf1138" (UID: "d628c755-5347-4aaf-b233-47e28ccf1138"). InnerVolumeSpecName "kube-api-access-p2xxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.281736 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d628c755-5347-4aaf-b233-47e28ccf1138" (UID: "d628c755-5347-4aaf-b233-47e28ccf1138"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.290090 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-inventory" (OuterVolumeSpecName: "inventory") pod "d628c755-5347-4aaf-b233-47e28ccf1138" (UID: "d628c755-5347-4aaf-b233-47e28ccf1138"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.334277 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2xxt\" (UniqueName: \"kubernetes.io/projected/d628c755-5347-4aaf-b233-47e28ccf1138-kube-api-access-p2xxt\") on node \"crc\" DevicePath \"\"" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.334317 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.334333 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d628c755-5347-4aaf-b233-47e28ccf1138-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.620590 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" event={"ID":"d628c755-5347-4aaf-b233-47e28ccf1138","Type":"ContainerDied","Data":"e08754c62bf4a81f22fe485b196ac3dbca1de8a2c30c0e75ee101f60ed3f2d16"} Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.620639 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e08754c62bf4a81f22fe485b196ac3dbca1de8a2c30c0e75ee101f60ed3f2d16" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.620652 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6pvbq" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.689416 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96"] Feb 19 21:32:55 crc kubenswrapper[4886]: E0219 21:32:55.689915 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d628c755-5347-4aaf-b233-47e28ccf1138" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.689934 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d628c755-5347-4aaf-b233-47e28ccf1138" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.690168 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d628c755-5347-4aaf-b233-47e28ccf1138" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.690958 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.696371 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.696371 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.696536 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.696540 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.707615 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96"] Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.850906 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.850993 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.851085 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlk2\" (UniqueName: \"kubernetes.io/projected/518ae7f9-56f0-4a14-90a3-22873d37fa23-kube-api-access-qvlk2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.955405 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.955534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.955663 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvlk2\" (UniqueName: \"kubernetes.io/projected/518ae7f9-56f0-4a14-90a3-22873d37fa23-kube-api-access-qvlk2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.962158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.964154 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:55 crc kubenswrapper[4886]: I0219 21:32:55.985727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvlk2\" (UniqueName: \"kubernetes.io/projected/518ae7f9-56f0-4a14-90a3-22873d37fa23-kube-api-access-qvlk2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jft96\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:56 crc kubenswrapper[4886]: I0219 21:32:56.025513 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:32:56 crc kubenswrapper[4886]: I0219 21:32:56.691891 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96"] Feb 19 21:32:57 crc kubenswrapper[4886]: I0219 21:32:57.030878 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-8xp7p"] Feb 19 21:32:57 crc kubenswrapper[4886]: I0219 21:32:57.041883 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-8xp7p"] Feb 19 21:32:57 crc kubenswrapper[4886]: I0219 21:32:57.644242 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" event={"ID":"518ae7f9-56f0-4a14-90a3-22873d37fa23","Type":"ContainerStarted","Data":"1e41861fce64aff5234e0ac856540cb007ab75de1a3e68e85e6745fadea782e7"} Feb 19 21:32:57 crc kubenswrapper[4886]: I0219 21:32:57.644872 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" event={"ID":"518ae7f9-56f0-4a14-90a3-22873d37fa23","Type":"ContainerStarted","Data":"6e1f31fe6cd2531917300b7d1d64c3739ce7b330b3423537b387bee6f73fce65"} Feb 19 21:32:57 crc kubenswrapper[4886]: I0219 21:32:57.675009 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" podStartSLOduration=2.253812813 podStartE2EDuration="2.674986216s" podCreationTimestamp="2026-02-19 21:32:55 +0000 UTC" firstStartedPulling="2026-02-19 21:32:56.696580985 +0000 UTC m=+2007.324424035" lastFinishedPulling="2026-02-19 21:32:57.117754348 +0000 UTC m=+2007.745597438" observedRunningTime="2026-02-19 21:32:57.662354753 +0000 UTC m=+2008.290197813" watchObservedRunningTime="2026-02-19 21:32:57.674986216 +0000 UTC m=+2008.302829276" Feb 19 21:32:58 crc kubenswrapper[4886]: I0219 21:32:58.043549 4886 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-3436-account-create-update-kk7fg"] Feb 19 21:32:58 crc kubenswrapper[4886]: I0219 21:32:58.057039 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-3436-account-create-update-kk7fg"] Feb 19 21:32:58 crc kubenswrapper[4886]: I0219 21:32:58.620709 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6627da-95f2-48b8-ba42-eae7018d98b5" path="/var/lib/kubelet/pods/be6627da-95f2-48b8-ba42-eae7018d98b5/volumes" Feb 19 21:32:58 crc kubenswrapper[4886]: I0219 21:32:58.621988 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffed57be-9be1-478b-b43b-c6d67de8630c" path="/var/lib/kubelet/pods/ffed57be-9be1-478b-b43b-c6d67de8630c/volumes" Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.324678 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.325349 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.325400 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.326378 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"35564522d2e7778839438b05f01b84ea628562673e5c8941499cfff7719ef457"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.326554 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://35564522d2e7778839438b05f01b84ea628562673e5c8941499cfff7719ef457" gracePeriod=600 Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.944376 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="35564522d2e7778839438b05f01b84ea628562673e5c8941499cfff7719ef457" exitCode=0 Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.944799 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"35564522d2e7778839438b05f01b84ea628562673e5c8941499cfff7719ef457"} Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.944835 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795"} Feb 19 21:33:18 crc kubenswrapper[4886]: I0219 21:33:18.944855 4886 scope.go:117] "RemoveContainer" containerID="3abc6cceab388c835dfc54651af96499339633c633cb1a91be256d1fa2c6f19c" Feb 19 21:33:20 crc kubenswrapper[4886]: I0219 21:33:20.054151 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2dzvs"] Feb 19 21:33:20 crc kubenswrapper[4886]: I0219 21:33:20.069760 
4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2dzvs"] Feb 19 21:33:20 crc kubenswrapper[4886]: I0219 21:33:20.630137 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6d74fb-e91d-4838-ab58-90aa48b0bc8c" path="/var/lib/kubelet/pods/2f6d74fb-e91d-4838-ab58-90aa48b0bc8c/volumes" Feb 19 21:33:34 crc kubenswrapper[4886]: I0219 21:33:34.076928 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9c97q"] Feb 19 21:33:34 crc kubenswrapper[4886]: I0219 21:33:34.092637 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9c97q"] Feb 19 21:33:34 crc kubenswrapper[4886]: I0219 21:33:34.620070 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dd3f102-b16f-4118-a32e-64eda5ae8047" path="/var/lib/kubelet/pods/9dd3f102-b16f-4118-a32e-64eda5ae8047/volumes" Feb 19 21:33:35 crc kubenswrapper[4886]: I0219 21:33:35.153351 4886 generic.go:334] "Generic (PLEG): container finished" podID="518ae7f9-56f0-4a14-90a3-22873d37fa23" containerID="1e41861fce64aff5234e0ac856540cb007ab75de1a3e68e85e6745fadea782e7" exitCode=0 Feb 19 21:33:35 crc kubenswrapper[4886]: I0219 21:33:35.153478 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" event={"ID":"518ae7f9-56f0-4a14-90a3-22873d37fa23","Type":"ContainerDied","Data":"1e41861fce64aff5234e0ac856540cb007ab75de1a3e68e85e6745fadea782e7"} Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.720973 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.856711 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-inventory\") pod \"518ae7f9-56f0-4a14-90a3-22873d37fa23\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.856837 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-ssh-key-openstack-edpm-ipam\") pod \"518ae7f9-56f0-4a14-90a3-22873d37fa23\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.857210 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvlk2\" (UniqueName: \"kubernetes.io/projected/518ae7f9-56f0-4a14-90a3-22873d37fa23-kube-api-access-qvlk2\") pod \"518ae7f9-56f0-4a14-90a3-22873d37fa23\" (UID: \"518ae7f9-56f0-4a14-90a3-22873d37fa23\") " Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.862806 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518ae7f9-56f0-4a14-90a3-22873d37fa23-kube-api-access-qvlk2" (OuterVolumeSpecName: "kube-api-access-qvlk2") pod "518ae7f9-56f0-4a14-90a3-22873d37fa23" (UID: "518ae7f9-56f0-4a14-90a3-22873d37fa23"). InnerVolumeSpecName "kube-api-access-qvlk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.890940 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-inventory" (OuterVolumeSpecName: "inventory") pod "518ae7f9-56f0-4a14-90a3-22873d37fa23" (UID: "518ae7f9-56f0-4a14-90a3-22873d37fa23"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.919535 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "518ae7f9-56f0-4a14-90a3-22873d37fa23" (UID: "518ae7f9-56f0-4a14-90a3-22873d37fa23"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.959500 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvlk2\" (UniqueName: \"kubernetes.io/projected/518ae7f9-56f0-4a14-90a3-22873d37fa23-kube-api-access-qvlk2\") on node \"crc\" DevicePath \"\"" Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.959540 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:33:36 crc kubenswrapper[4886]: I0219 21:33:36.959549 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/518ae7f9-56f0-4a14-90a3-22873d37fa23-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.216509 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" event={"ID":"518ae7f9-56f0-4a14-90a3-22873d37fa23","Type":"ContainerDied","Data":"6e1f31fe6cd2531917300b7d1d64c3739ce7b330b3423537b387bee6f73fce65"} Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.216569 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1f31fe6cd2531917300b7d1d64c3739ce7b330b3423537b387bee6f73fce65" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 
21:33:37.216578 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jft96" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.265900 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t"] Feb 19 21:33:37 crc kubenswrapper[4886]: E0219 21:33:37.266687 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518ae7f9-56f0-4a14-90a3-22873d37fa23" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.266763 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="518ae7f9-56f0-4a14-90a3-22873d37fa23" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.267018 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="518ae7f9-56f0-4a14-90a3-22873d37fa23" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.267867 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.274355 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.274487 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.274605 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.274796 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.277542 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t"] Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.366708 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzjd\" (UniqueName: \"kubernetes.io/projected/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-kube-api-access-nfzjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.366853 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc 
kubenswrapper[4886]: I0219 21:33:37.367025 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.470010 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.470254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzjd\" (UniqueName: \"kubernetes.io/projected/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-kube-api-access-nfzjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.470508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.474956 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.475451 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.491246 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzjd\" (UniqueName: \"kubernetes.io/projected/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-kube-api-access-nfzjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:37 crc kubenswrapper[4886]: I0219 21:33:37.586469 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:33:38 crc kubenswrapper[4886]: W0219 21:33:38.207006 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ffa8a2f_48e1_4ffb_a2cc_7626d8b464b7.slice/crio-2de9b114a71b50f72b9b1fd2625232ceacb2ccad74585c4ec9ecd61992baebf4 WatchSource:0}: Error finding container 2de9b114a71b50f72b9b1fd2625232ceacb2ccad74585c4ec9ecd61992baebf4: Status 404 returned error can't find the container with id 2de9b114a71b50f72b9b1fd2625232ceacb2ccad74585c4ec9ecd61992baebf4 Feb 19 21:33:38 crc kubenswrapper[4886]: I0219 21:33:38.210233 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t"] Feb 19 21:33:38 crc kubenswrapper[4886]: I0219 21:33:38.227156 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" event={"ID":"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7","Type":"ContainerStarted","Data":"2de9b114a71b50f72b9b1fd2625232ceacb2ccad74585c4ec9ecd61992baebf4"} Feb 19 21:33:39 crc kubenswrapper[4886]: I0219 21:33:39.252100 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" event={"ID":"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7","Type":"ContainerStarted","Data":"716087b1a186fd25deda15fc21273aebf55b1092387f4228a42bc87e33c02e93"} Feb 19 21:33:39 crc kubenswrapper[4886]: I0219 21:33:39.292989 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" podStartSLOduration=1.854652398 podStartE2EDuration="2.292969926s" podCreationTimestamp="2026-02-19 21:33:37 +0000 UTC" firstStartedPulling="2026-02-19 21:33:38.21000409 +0000 UTC m=+2048.837847140" lastFinishedPulling="2026-02-19 21:33:38.648321618 +0000 UTC m=+2049.276164668" 
observedRunningTime="2026-02-19 21:33:39.273088944 +0000 UTC m=+2049.900932024" watchObservedRunningTime="2026-02-19 21:33:39.292969926 +0000 UTC m=+2049.920812976" Feb 19 21:33:53 crc kubenswrapper[4886]: I0219 21:33:53.360154 4886 scope.go:117] "RemoveContainer" containerID="f675f9558378fc11064443b53f590805e586c832a9a331c6160a1b849d235cfe" Feb 19 21:33:53 crc kubenswrapper[4886]: I0219 21:33:53.421544 4886 scope.go:117] "RemoveContainer" containerID="17256ff59e07d508420c8f2ea37f815b84e92bbe34dbd4a000b8ccdbe63cdfdd" Feb 19 21:33:53 crc kubenswrapper[4886]: I0219 21:33:53.485490 4886 scope.go:117] "RemoveContainer" containerID="06077776d05e160695a0e955fdec2b08900c1467c2e40c59483a3e97149a8fb0" Feb 19 21:33:53 crc kubenswrapper[4886]: I0219 21:33:53.540007 4886 scope.go:117] "RemoveContainer" containerID="17b9c0e1a0dcac51923f8d0a5389084c0b49fce73aa5a2ec476b75ce30de53ff" Feb 19 21:33:53 crc kubenswrapper[4886]: I0219 21:33:53.599020 4886 scope.go:117] "RemoveContainer" containerID="bbc1f0f3c2251812647c9ba8c2c1c9d006361376576f8dbf661f426aa2f679e9" Feb 19 21:33:56 crc kubenswrapper[4886]: I0219 21:33:56.780515 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrxv8"] Feb 19 21:33:56 crc kubenswrapper[4886]: I0219 21:33:56.789572 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:56 crc kubenswrapper[4886]: I0219 21:33:56.843970 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrxv8"] Feb 19 21:33:56 crc kubenswrapper[4886]: I0219 21:33:56.962297 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-catalog-content\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:56 crc kubenswrapper[4886]: I0219 21:33:56.962356 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vhd\" (UniqueName: \"kubernetes.io/projected/599ea19f-0643-4915-9114-d38a38e92e6e-kube-api-access-f5vhd\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:56 crc kubenswrapper[4886]: I0219 21:33:56.962585 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-utilities\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.064699 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-utilities\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.064828 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-catalog-content\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.064853 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5vhd\" (UniqueName: \"kubernetes.io/projected/599ea19f-0643-4915-9114-d38a38e92e6e-kube-api-access-f5vhd\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.065171 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-utilities\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.065629 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-catalog-content\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.104504 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5vhd\" (UniqueName: \"kubernetes.io/projected/599ea19f-0643-4915-9114-d38a38e92e6e-kube-api-access-f5vhd\") pod \"redhat-operators-qrxv8\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.136851 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:33:57 crc kubenswrapper[4886]: W0219 21:33:57.719672 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod599ea19f_0643_4915_9114_d38a38e92e6e.slice/crio-514772cd9bdc7b67588c9b9ab4de1e04a3552a21bbff6ebe8557907584461711 WatchSource:0}: Error finding container 514772cd9bdc7b67588c9b9ab4de1e04a3552a21bbff6ebe8557907584461711: Status 404 returned error can't find the container with id 514772cd9bdc7b67588c9b9ab4de1e04a3552a21bbff6ebe8557907584461711 Feb 19 21:33:57 crc kubenswrapper[4886]: I0219 21:33:57.720710 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrxv8"] Feb 19 21:33:58 crc kubenswrapper[4886]: I0219 21:33:58.499335 4886 generic.go:334] "Generic (PLEG): container finished" podID="599ea19f-0643-4915-9114-d38a38e92e6e" containerID="e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079" exitCode=0 Feb 19 21:33:58 crc kubenswrapper[4886]: I0219 21:33:58.499387 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerDied","Data":"e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079"} Feb 19 21:33:58 crc kubenswrapper[4886]: I0219 21:33:58.499601 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerStarted","Data":"514772cd9bdc7b67588c9b9ab4de1e04a3552a21bbff6ebe8557907584461711"} Feb 19 21:33:59 crc kubenswrapper[4886]: I0219 21:33:59.511478 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" 
event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerStarted","Data":"f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01"} Feb 19 21:34:04 crc kubenswrapper[4886]: I0219 21:34:04.573742 4886 generic.go:334] "Generic (PLEG): container finished" podID="599ea19f-0643-4915-9114-d38a38e92e6e" containerID="f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01" exitCode=0 Feb 19 21:34:04 crc kubenswrapper[4886]: I0219 21:34:04.573875 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerDied","Data":"f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01"} Feb 19 21:34:05 crc kubenswrapper[4886]: I0219 21:34:05.586068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerStarted","Data":"bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00"} Feb 19 21:34:05 crc kubenswrapper[4886]: I0219 21:34:05.606018 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrxv8" podStartSLOduration=3.118820624 podStartE2EDuration="9.605998637s" podCreationTimestamp="2026-02-19 21:33:56 +0000 UTC" firstStartedPulling="2026-02-19 21:33:58.501378978 +0000 UTC m=+2069.129222028" lastFinishedPulling="2026-02-19 21:34:04.988556991 +0000 UTC m=+2075.616400041" observedRunningTime="2026-02-19 21:34:05.60285337 +0000 UTC m=+2076.230696440" watchObservedRunningTime="2026-02-19 21:34:05.605998637 +0000 UTC m=+2076.233841687" Feb 19 21:34:06 crc kubenswrapper[4886]: I0219 21:34:06.063012 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-gfkv8"] Feb 19 21:34:06 crc kubenswrapper[4886]: I0219 21:34:06.076112 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-gfkv8"] Feb 19 21:34:06 crc kubenswrapper[4886]: I0219 21:34:06.614410 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8426eb13-8a64-41fd-a608-0fd4d138cca1" path="/var/lib/kubelet/pods/8426eb13-8a64-41fd-a608-0fd4d138cca1/volumes" Feb 19 21:34:07 crc kubenswrapper[4886]: I0219 21:34:07.138433 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:34:07 crc kubenswrapper[4886]: I0219 21:34:07.138760 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:34:08 crc kubenswrapper[4886]: I0219 21:34:08.181862 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrxv8" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="registry-server" probeResult="failure" output=< Feb 19 21:34:08 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:34:08 crc kubenswrapper[4886]: > Feb 19 21:34:18 crc kubenswrapper[4886]: I0219 21:34:18.207568 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrxv8" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="registry-server" probeResult="failure" output=< Feb 19 21:34:18 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:34:18 crc kubenswrapper[4886]: > Feb 19 21:34:27 crc kubenswrapper[4886]: I0219 21:34:27.218467 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:34:27 crc kubenswrapper[4886]: I0219 21:34:27.285555 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:34:27 crc kubenswrapper[4886]: I0219 21:34:27.977987 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-qrxv8"] Feb 19 21:34:28 crc kubenswrapper[4886]: I0219 21:34:28.867973 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrxv8" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="registry-server" containerID="cri-o://bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00" gracePeriod=2 Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.417753 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.509492 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-utilities\") pod \"599ea19f-0643-4915-9114-d38a38e92e6e\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.509631 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5vhd\" (UniqueName: \"kubernetes.io/projected/599ea19f-0643-4915-9114-d38a38e92e6e-kube-api-access-f5vhd\") pod \"599ea19f-0643-4915-9114-d38a38e92e6e\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.509841 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-catalog-content\") pod \"599ea19f-0643-4915-9114-d38a38e92e6e\" (UID: \"599ea19f-0643-4915-9114-d38a38e92e6e\") " Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.510246 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-utilities" (OuterVolumeSpecName: "utilities") pod "599ea19f-0643-4915-9114-d38a38e92e6e" (UID: 
"599ea19f-0643-4915-9114-d38a38e92e6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.510936 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.517498 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599ea19f-0643-4915-9114-d38a38e92e6e-kube-api-access-f5vhd" (OuterVolumeSpecName: "kube-api-access-f5vhd") pod "599ea19f-0643-4915-9114-d38a38e92e6e" (UID: "599ea19f-0643-4915-9114-d38a38e92e6e"). InnerVolumeSpecName "kube-api-access-f5vhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.612556 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5vhd\" (UniqueName: \"kubernetes.io/projected/599ea19f-0643-4915-9114-d38a38e92e6e-kube-api-access-f5vhd\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.650293 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "599ea19f-0643-4915-9114-d38a38e92e6e" (UID: "599ea19f-0643-4915-9114-d38a38e92e6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.715012 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599ea19f-0643-4915-9114-d38a38e92e6e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.879958 4886 generic.go:334] "Generic (PLEG): container finished" podID="599ea19f-0643-4915-9114-d38a38e92e6e" containerID="bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00" exitCode=0 Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.880010 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxv8" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.880027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerDied","Data":"bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00"} Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.880987 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxv8" event={"ID":"599ea19f-0643-4915-9114-d38a38e92e6e","Type":"ContainerDied","Data":"514772cd9bdc7b67588c9b9ab4de1e04a3552a21bbff6ebe8557907584461711"} Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.881013 4886 scope.go:117] "RemoveContainer" containerID="bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.907990 4886 scope.go:117] "RemoveContainer" containerID="f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.919341 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrxv8"] Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 
21:34:29.928845 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrxv8"] Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.938700 4886 scope.go:117] "RemoveContainer" containerID="e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.989399 4886 scope.go:117] "RemoveContainer" containerID="bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00" Feb 19 21:34:29 crc kubenswrapper[4886]: E0219 21:34:29.989910 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00\": container with ID starting with bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00 not found: ID does not exist" containerID="bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.989938 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00"} err="failed to get container status \"bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00\": rpc error: code = NotFound desc = could not find container \"bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00\": container with ID starting with bebb091597cea652954789ea87da75f3ed24845829035d4360c01f57ae67cf00 not found: ID does not exist" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.989958 4886 scope.go:117] "RemoveContainer" containerID="f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01" Feb 19 21:34:29 crc kubenswrapper[4886]: E0219 21:34:29.990340 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01\": container with ID 
starting with f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01 not found: ID does not exist" containerID="f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.990384 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01"} err="failed to get container status \"f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01\": rpc error: code = NotFound desc = could not find container \"f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01\": container with ID starting with f8acb87d53399380ace13add25e03b2cf7dd54f8cdf8ddecba18fddded575e01 not found: ID does not exist" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.990412 4886 scope.go:117] "RemoveContainer" containerID="e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079" Feb 19 21:34:29 crc kubenswrapper[4886]: E0219 21:34:29.990731 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079\": container with ID starting with e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079 not found: ID does not exist" containerID="e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079" Feb 19 21:34:29 crc kubenswrapper[4886]: I0219 21:34:29.990757 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079"} err="failed to get container status \"e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079\": rpc error: code = NotFound desc = could not find container \"e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079\": container with ID starting with e7702cae3f6f0bcf2d97b422bcd99072882d1189e0446db93ada85257fbc0079 not found: 
ID does not exist" Feb 19 21:34:30 crc kubenswrapper[4886]: I0219 21:34:30.630522 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" path="/var/lib/kubelet/pods/599ea19f-0643-4915-9114-d38a38e92e6e/volumes" Feb 19 21:34:30 crc kubenswrapper[4886]: I0219 21:34:30.896859 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" containerID="716087b1a186fd25deda15fc21273aebf55b1092387f4228a42bc87e33c02e93" exitCode=0 Feb 19 21:34:30 crc kubenswrapper[4886]: I0219 21:34:30.896904 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" event={"ID":"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7","Type":"ContainerDied","Data":"716087b1a186fd25deda15fc21273aebf55b1092387f4228a42bc87e33c02e93"} Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.457331 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.581245 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfzjd\" (UniqueName: \"kubernetes.io/projected/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-kube-api-access-nfzjd\") pod \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.581770 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-inventory\") pod \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.582049 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-ssh-key-openstack-edpm-ipam\") pod \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\" (UID: \"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7\") " Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.598183 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-kube-api-access-nfzjd" (OuterVolumeSpecName: "kube-api-access-nfzjd") pod "0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" (UID: "0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7"). InnerVolumeSpecName "kube-api-access-nfzjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.637587 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-inventory" (OuterVolumeSpecName: "inventory") pod "0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" (UID: "0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.638100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" (UID: "0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.685992 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.686149 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.686180 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfzjd\" (UniqueName: \"kubernetes.io/projected/0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7-kube-api-access-nfzjd\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.918915 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" event={"ID":"0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7","Type":"ContainerDied","Data":"2de9b114a71b50f72b9b1fd2625232ceacb2ccad74585c4ec9ecd61992baebf4"} Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.918984 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de9b114a71b50f72b9b1fd2625232ceacb2ccad74585c4ec9ecd61992baebf4" Feb 19 21:34:32 crc kubenswrapper[4886]: I0219 21:34:32.919000 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzx4t" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.012835 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zxqbt"] Feb 19 21:34:33 crc kubenswrapper[4886]: E0219 21:34:33.013250 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="extract-content" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.013284 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="extract-content" Feb 19 21:34:33 crc kubenswrapper[4886]: E0219 21:34:33.013311 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.013319 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:34:33 crc kubenswrapper[4886]: E0219 21:34:33.013333 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="registry-server" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.013340 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="registry-server" Feb 19 21:34:33 crc kubenswrapper[4886]: E0219 21:34:33.013364 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="extract-utilities" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.013371 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="extract-utilities" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.013573 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffa8a2f-48e1-4ffb-a2cc-7626d8b464b7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.013589 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="599ea19f-0643-4915-9114-d38a38e92e6e" containerName="registry-server" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.014432 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.016721 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.016847 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.017055 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.017750 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.035304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zxqbt"] Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.094688 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.094777 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.094860 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxkz\" (UniqueName: \"kubernetes.io/projected/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-kube-api-access-jnxkz\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.196400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.196493 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.196584 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxkz\" (UniqueName: \"kubernetes.io/projected/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-kube-api-access-jnxkz\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.201168 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.204210 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.217068 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxkz\" (UniqueName: \"kubernetes.io/projected/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-kube-api-access-jnxkz\") pod \"ssh-known-hosts-edpm-deployment-zxqbt\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.338252 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:33 crc kubenswrapper[4886]: I0219 21:34:33.912059 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zxqbt"] Feb 19 21:34:33 crc kubenswrapper[4886]: W0219 21:34:33.922199 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad7a85f_9ad1_4787_ba93_3f9a114b35ef.slice/crio-00ae5cb3720e5c855564a86b0966ebfa1e268e560cdb4eb7f6e63a18385495e3 WatchSource:0}: Error finding container 00ae5cb3720e5c855564a86b0966ebfa1e268e560cdb4eb7f6e63a18385495e3: Status 404 returned error can't find the container with id 00ae5cb3720e5c855564a86b0966ebfa1e268e560cdb4eb7f6e63a18385495e3 Feb 19 21:34:34 crc kubenswrapper[4886]: I0219 21:34:34.942161 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" event={"ID":"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef","Type":"ContainerStarted","Data":"8025f8c6e74267427df52ab92ebbd20459b4ef70845ece663c3fd723c0a7f35f"} Feb 19 21:34:34 crc kubenswrapper[4886]: I0219 21:34:34.943467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" event={"ID":"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef","Type":"ContainerStarted","Data":"00ae5cb3720e5c855564a86b0966ebfa1e268e560cdb4eb7f6e63a18385495e3"} Feb 19 21:34:34 crc kubenswrapper[4886]: I0219 21:34:34.972860 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" podStartSLOduration=2.484260412 podStartE2EDuration="2.972842842s" podCreationTimestamp="2026-02-19 21:34:32 +0000 UTC" firstStartedPulling="2026-02-19 21:34:33.926238455 +0000 UTC m=+2104.554081515" lastFinishedPulling="2026-02-19 21:34:34.414820905 +0000 UTC m=+2105.042663945" observedRunningTime="2026-02-19 21:34:34.957591455 +0000 UTC m=+2105.585434535" 
watchObservedRunningTime="2026-02-19 21:34:34.972842842 +0000 UTC m=+2105.600685892" Feb 19 21:34:42 crc kubenswrapper[4886]: I0219 21:34:42.041363 4886 generic.go:334] "Generic (PLEG): container finished" podID="3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" containerID="8025f8c6e74267427df52ab92ebbd20459b4ef70845ece663c3fd723c0a7f35f" exitCode=0 Feb 19 21:34:42 crc kubenswrapper[4886]: I0219 21:34:42.041536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" event={"ID":"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef","Type":"ContainerDied","Data":"8025f8c6e74267427df52ab92ebbd20459b4ef70845ece663c3fd723c0a7f35f"} Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.648811 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.823779 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-ssh-key-openstack-edpm-ipam\") pod \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.823834 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxkz\" (UniqueName: \"kubernetes.io/projected/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-kube-api-access-jnxkz\") pod \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.823911 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-inventory-0\") pod \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\" (UID: \"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef\") " Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 
21:34:43.841132 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-kube-api-access-jnxkz" (OuterVolumeSpecName: "kube-api-access-jnxkz") pod "3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" (UID: "3ad7a85f-9ad1-4787-ba93-3f9a114b35ef"). InnerVolumeSpecName "kube-api-access-jnxkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.871051 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" (UID: "3ad7a85f-9ad1-4787-ba93-3f9a114b35ef"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.879177 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" (UID: "3ad7a85f-9ad1-4787-ba93-3f9a114b35ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.926848 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.926881 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxkz\" (UniqueName: \"kubernetes.io/projected/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-kube-api-access-jnxkz\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:43 crc kubenswrapper[4886]: I0219 21:34:43.926891 4886 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3ad7a85f-9ad1-4787-ba93-3f9a114b35ef-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.095017 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" event={"ID":"3ad7a85f-9ad1-4787-ba93-3f9a114b35ef","Type":"ContainerDied","Data":"00ae5cb3720e5c855564a86b0966ebfa1e268e560cdb4eb7f6e63a18385495e3"} Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.095075 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ae5cb3720e5c855564a86b0966ebfa1e268e560cdb4eb7f6e63a18385495e3" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.095127 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zxqbt" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.181783 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6"] Feb 19 21:34:44 crc kubenswrapper[4886]: E0219 21:34:44.182415 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" containerName="ssh-known-hosts-edpm-deployment" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.182446 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" containerName="ssh-known-hosts-edpm-deployment" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.182802 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad7a85f-9ad1-4787-ba93-3f9a114b35ef" containerName="ssh-known-hosts-edpm-deployment" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.183828 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.186496 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.186656 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.186697 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.186873 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.194875 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6"] Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.336170 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwrw\" (UniqueName: \"kubernetes.io/projected/95410c4a-1fde-4cf9-8aa4-368d7b122905-kube-api-access-czwrw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.336293 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.336386 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.439606 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwrw\" (UniqueName: \"kubernetes.io/projected/95410c4a-1fde-4cf9-8aa4-368d7b122905-kube-api-access-czwrw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.439903 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.439944 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.452684 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: 
\"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.454005 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.457799 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwrw\" (UniqueName: \"kubernetes.io/projected/95410c4a-1fde-4cf9-8aa4-368d7b122905-kube-api-access-czwrw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s7rb6\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:44 crc kubenswrapper[4886]: I0219 21:34:44.500024 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:45 crc kubenswrapper[4886]: I0219 21:34:45.069930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6"] Feb 19 21:34:45 crc kubenswrapper[4886]: W0219 21:34:45.073787 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95410c4a_1fde_4cf9_8aa4_368d7b122905.slice/crio-0a33ecb9633ec0f9f7d182fc4691809d9929b2b55cc1f03d8829f4c6b1a450cc WatchSource:0}: Error finding container 0a33ecb9633ec0f9f7d182fc4691809d9929b2b55cc1f03d8829f4c6b1a450cc: Status 404 returned error can't find the container with id 0a33ecb9633ec0f9f7d182fc4691809d9929b2b55cc1f03d8829f4c6b1a450cc Feb 19 21:34:45 crc kubenswrapper[4886]: I0219 21:34:45.076778 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:34:45 crc kubenswrapper[4886]: I0219 21:34:45.105201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" event={"ID":"95410c4a-1fde-4cf9-8aa4-368d7b122905","Type":"ContainerStarted","Data":"0a33ecb9633ec0f9f7d182fc4691809d9929b2b55cc1f03d8829f4c6b1a450cc"} Feb 19 21:34:46 crc kubenswrapper[4886]: I0219 21:34:46.117573 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" event={"ID":"95410c4a-1fde-4cf9-8aa4-368d7b122905","Type":"ContainerStarted","Data":"23a25781512fa35248a3d45c90a2c9850216be0d81da7719da78cbb18769145f"} Feb 19 21:34:46 crc kubenswrapper[4886]: I0219 21:34:46.147728 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" podStartSLOduration=1.763461987 podStartE2EDuration="2.147697937s" podCreationTimestamp="2026-02-19 21:34:44 +0000 UTC" 
firstStartedPulling="2026-02-19 21:34:45.076531562 +0000 UTC m=+2115.704374612" lastFinishedPulling="2026-02-19 21:34:45.460767512 +0000 UTC m=+2116.088610562" observedRunningTime="2026-02-19 21:34:46.140057078 +0000 UTC m=+2116.767900138" watchObservedRunningTime="2026-02-19 21:34:46.147697937 +0000 UTC m=+2116.775541027" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.707611 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bmg7c"] Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.712348 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.717724 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmg7c"] Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.750970 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-catalog-content\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.751038 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-utilities\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.751063 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxwt\" (UniqueName: \"kubernetes.io/projected/464efe10-536c-4330-ac5f-4afc3484d46c-kube-api-access-lmxwt\") pod \"redhat-marketplace-bmg7c\" 
(UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.852035 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-catalog-content\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.852105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-utilities\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.852130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxwt\" (UniqueName: \"kubernetes.io/projected/464efe10-536c-4330-ac5f-4afc3484d46c-kube-api-access-lmxwt\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.852820 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-catalog-content\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.853032 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-utilities\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " 
pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:52 crc kubenswrapper[4886]: I0219 21:34:52.874118 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxwt\" (UniqueName: \"kubernetes.io/projected/464efe10-536c-4330-ac5f-4afc3484d46c-kube-api-access-lmxwt\") pod \"redhat-marketplace-bmg7c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:53 crc kubenswrapper[4886]: I0219 21:34:53.089037 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:34:53 crc kubenswrapper[4886]: I0219 21:34:53.585124 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmg7c"] Feb 19 21:34:54 crc kubenswrapper[4886]: I0219 21:34:54.215962 4886 generic.go:334] "Generic (PLEG): container finished" podID="464efe10-536c-4330-ac5f-4afc3484d46c" containerID="063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf" exitCode=0 Feb 19 21:34:54 crc kubenswrapper[4886]: I0219 21:34:54.216006 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerDied","Data":"063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf"} Feb 19 21:34:54 crc kubenswrapper[4886]: I0219 21:34:54.216163 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerStarted","Data":"3890dbbc79496dc5c2b08285a3c276a01519b0d2e68f747bbfe9c55d215a14e9"} Feb 19 21:34:54 crc kubenswrapper[4886]: I0219 21:34:54.459854 4886 scope.go:117] "RemoveContainer" containerID="5f051368ab1d2d3622745d171ab2c0994ee20061ba50c358c77770ade92477aa" Feb 19 21:34:55 crc kubenswrapper[4886]: I0219 21:34:55.229653 4886 generic.go:334] "Generic (PLEG): 
container finished" podID="95410c4a-1fde-4cf9-8aa4-368d7b122905" containerID="23a25781512fa35248a3d45c90a2c9850216be0d81da7719da78cbb18769145f" exitCode=0 Feb 19 21:34:55 crc kubenswrapper[4886]: I0219 21:34:55.229725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" event={"ID":"95410c4a-1fde-4cf9-8aa4-368d7b122905","Type":"ContainerDied","Data":"23a25781512fa35248a3d45c90a2c9850216be0d81da7719da78cbb18769145f"} Feb 19 21:34:55 crc kubenswrapper[4886]: I0219 21:34:55.232212 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerStarted","Data":"88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed"} Feb 19 21:34:56 crc kubenswrapper[4886]: I0219 21:34:56.253775 4886 generic.go:334] "Generic (PLEG): container finished" podID="464efe10-536c-4330-ac5f-4afc3484d46c" containerID="88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed" exitCode=0 Feb 19 21:34:56 crc kubenswrapper[4886]: I0219 21:34:56.253945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerDied","Data":"88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed"} Feb 19 21:34:56 crc kubenswrapper[4886]: I0219 21:34:56.905243 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.058891 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-inventory\") pod \"95410c4a-1fde-4cf9-8aa4-368d7b122905\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.058969 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czwrw\" (UniqueName: \"kubernetes.io/projected/95410c4a-1fde-4cf9-8aa4-368d7b122905-kube-api-access-czwrw\") pod \"95410c4a-1fde-4cf9-8aa4-368d7b122905\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.059185 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-ssh-key-openstack-edpm-ipam\") pod \"95410c4a-1fde-4cf9-8aa4-368d7b122905\" (UID: \"95410c4a-1fde-4cf9-8aa4-368d7b122905\") " Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.076576 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95410c4a-1fde-4cf9-8aa4-368d7b122905-kube-api-access-czwrw" (OuterVolumeSpecName: "kube-api-access-czwrw") pod "95410c4a-1fde-4cf9-8aa4-368d7b122905" (UID: "95410c4a-1fde-4cf9-8aa4-368d7b122905"). InnerVolumeSpecName "kube-api-access-czwrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.096653 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "95410c4a-1fde-4cf9-8aa4-368d7b122905" (UID: "95410c4a-1fde-4cf9-8aa4-368d7b122905"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.109725 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-inventory" (OuterVolumeSpecName: "inventory") pod "95410c4a-1fde-4cf9-8aa4-368d7b122905" (UID: "95410c4a-1fde-4cf9-8aa4-368d7b122905"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.161800 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.161829 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czwrw\" (UniqueName: \"kubernetes.io/projected/95410c4a-1fde-4cf9-8aa4-368d7b122905-kube-api-access-czwrw\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.161843 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95410c4a-1fde-4cf9-8aa4-368d7b122905-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.265875 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" 
event={"ID":"95410c4a-1fde-4cf9-8aa4-368d7b122905","Type":"ContainerDied","Data":"0a33ecb9633ec0f9f7d182fc4691809d9929b2b55cc1f03d8829f4c6b1a450cc"} Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.265918 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a33ecb9633ec0f9f7d182fc4691809d9929b2b55cc1f03d8829f4c6b1a450cc" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.265891 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s7rb6" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.268691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerStarted","Data":"48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7"} Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.310391 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bmg7c" podStartSLOduration=2.850393587 podStartE2EDuration="5.31037177s" podCreationTimestamp="2026-02-19 21:34:52 +0000 UTC" firstStartedPulling="2026-02-19 21:34:54.220471582 +0000 UTC m=+2124.848314642" lastFinishedPulling="2026-02-19 21:34:56.680449785 +0000 UTC m=+2127.308292825" observedRunningTime="2026-02-19 21:34:57.290824147 +0000 UTC m=+2127.918667197" watchObservedRunningTime="2026-02-19 21:34:57.31037177 +0000 UTC m=+2127.938214820" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.345409 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl"] Feb 19 21:34:57 crc kubenswrapper[4886]: E0219 21:34:57.346034 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95410c4a-1fde-4cf9-8aa4-368d7b122905" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 
21:34:57.346055 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="95410c4a-1fde-4cf9-8aa4-368d7b122905" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.346387 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="95410c4a-1fde-4cf9-8aa4-368d7b122905" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.347427 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.349703 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.349898 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.350029 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.350157 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.356435 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl"] Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.469724 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmj9\" (UniqueName: \"kubernetes.io/projected/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-kube-api-access-5rmj9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 
crc kubenswrapper[4886]: I0219 21:34:57.469850 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.469940 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.571425 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.571526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.571638 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmj9\" (UniqueName: 
\"kubernetes.io/projected/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-kube-api-access-5rmj9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.584388 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.584428 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.589147 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmj9\" (UniqueName: \"kubernetes.io/projected/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-kube-api-access-5rmj9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:57 crc kubenswrapper[4886]: I0219 21:34:57.716070 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:34:58 crc kubenswrapper[4886]: I0219 21:34:58.368442 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl"] Feb 19 21:34:58 crc kubenswrapper[4886]: W0219 21:34:58.381467 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d3cd5a6_ab92_44ff_bdbd_a3f3e71b33a7.slice/crio-e4a785667916dc836c5d60e0c90818c3b366085114d7ea253b89205adea8cf67 WatchSource:0}: Error finding container e4a785667916dc836c5d60e0c90818c3b366085114d7ea253b89205adea8cf67: Status 404 returned error can't find the container with id e4a785667916dc836c5d60e0c90818c3b366085114d7ea253b89205adea8cf67 Feb 19 21:34:59 crc kubenswrapper[4886]: I0219 21:34:59.298431 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" event={"ID":"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7","Type":"ContainerStarted","Data":"851a25c2bb0602a487010b0458cecf535dd4e219ab9f8b5c7331d249a22b2da7"} Feb 19 21:34:59 crc kubenswrapper[4886]: I0219 21:34:59.298797 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" event={"ID":"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7","Type":"ContainerStarted","Data":"e4a785667916dc836c5d60e0c90818c3b366085114d7ea253b89205adea8cf67"} Feb 19 21:34:59 crc kubenswrapper[4886]: I0219 21:34:59.320962 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" podStartSLOduration=1.90114597 podStartE2EDuration="2.3209448s" podCreationTimestamp="2026-02-19 21:34:57 +0000 UTC" firstStartedPulling="2026-02-19 21:34:58.384111347 +0000 UTC m=+2129.011954417" lastFinishedPulling="2026-02-19 21:34:58.803910187 +0000 UTC m=+2129.431753247" 
observedRunningTime="2026-02-19 21:34:59.316038139 +0000 UTC m=+2129.943881229" watchObservedRunningTime="2026-02-19 21:34:59.3209448 +0000 UTC m=+2129.948787860" Feb 19 21:35:03 crc kubenswrapper[4886]: I0219 21:35:03.089250 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:35:03 crc kubenswrapper[4886]: I0219 21:35:03.089586 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:35:03 crc kubenswrapper[4886]: I0219 21:35:03.148463 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:35:03 crc kubenswrapper[4886]: I0219 21:35:03.404622 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:35:03 crc kubenswrapper[4886]: I0219 21:35:03.462103 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmg7c"] Feb 19 21:35:05 crc kubenswrapper[4886]: I0219 21:35:05.365797 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bmg7c" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="registry-server" containerID="cri-o://48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7" gracePeriod=2 Feb 19 21:35:05 crc kubenswrapper[4886]: I0219 21:35:05.946176 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:05.991486 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-utilities\") pod \"464efe10-536c-4330-ac5f-4afc3484d46c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:05.991852 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmxwt\" (UniqueName: \"kubernetes.io/projected/464efe10-536c-4330-ac5f-4afc3484d46c-kube-api-access-lmxwt\") pod \"464efe10-536c-4330-ac5f-4afc3484d46c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:05.991882 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-catalog-content\") pod \"464efe10-536c-4330-ac5f-4afc3484d46c\" (UID: \"464efe10-536c-4330-ac5f-4afc3484d46c\") " Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:05.993763 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-utilities" (OuterVolumeSpecName: "utilities") pod "464efe10-536c-4330-ac5f-4afc3484d46c" (UID: "464efe10-536c-4330-ac5f-4afc3484d46c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.001823 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464efe10-536c-4330-ac5f-4afc3484d46c-kube-api-access-lmxwt" (OuterVolumeSpecName: "kube-api-access-lmxwt") pod "464efe10-536c-4330-ac5f-4afc3484d46c" (UID: "464efe10-536c-4330-ac5f-4afc3484d46c"). InnerVolumeSpecName "kube-api-access-lmxwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.018091 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "464efe10-536c-4330-ac5f-4afc3484d46c" (UID: "464efe10-536c-4330-ac5f-4afc3484d46c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.094253 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmxwt\" (UniqueName: \"kubernetes.io/projected/464efe10-536c-4330-ac5f-4afc3484d46c-kube-api-access-lmxwt\") on node \"crc\" DevicePath \"\"" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.094297 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.094307 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464efe10-536c-4330-ac5f-4afc3484d46c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.381346 4886 generic.go:334] "Generic (PLEG): container finished" podID="464efe10-536c-4330-ac5f-4afc3484d46c" containerID="48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7" exitCode=0 Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.381501 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmg7c" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.381554 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerDied","Data":"48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7"} Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.381890 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmg7c" event={"ID":"464efe10-536c-4330-ac5f-4afc3484d46c","Type":"ContainerDied","Data":"3890dbbc79496dc5c2b08285a3c276a01519b0d2e68f747bbfe9c55d215a14e9"} Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.381936 4886 scope.go:117] "RemoveContainer" containerID="48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.412466 4886 scope.go:117] "RemoveContainer" containerID="88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.460995 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmg7c"] Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.461043 4886 scope.go:117] "RemoveContainer" containerID="063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.476445 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmg7c"] Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.506105 4886 scope.go:117] "RemoveContainer" containerID="48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7" Feb 19 21:35:06 crc kubenswrapper[4886]: E0219 21:35:06.514423 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7\": container with ID starting with 48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7 not found: ID does not exist" containerID="48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.514476 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7"} err="failed to get container status \"48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7\": rpc error: code = NotFound desc = could not find container \"48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7\": container with ID starting with 48b2475fba3099506ebc5e5288dc7373c4c251471f30075960544a56693c6db7 not found: ID does not exist" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.514504 4886 scope.go:117] "RemoveContainer" containerID="88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed" Feb 19 21:35:06 crc kubenswrapper[4886]: E0219 21:35:06.517694 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed\": container with ID starting with 88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed not found: ID does not exist" containerID="88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.517771 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed"} err="failed to get container status \"88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed\": rpc error: code = NotFound desc = could not find container \"88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed\": container with ID 
starting with 88ac4ca958c20fad3e6eaf9c638ba566ddb05a9fe8de3b16d86a2678139855ed not found: ID does not exist" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.517806 4886 scope.go:117] "RemoveContainer" containerID="063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf" Feb 19 21:35:06 crc kubenswrapper[4886]: E0219 21:35:06.521381 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf\": container with ID starting with 063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf not found: ID does not exist" containerID="063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.521418 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf"} err="failed to get container status \"063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf\": rpc error: code = NotFound desc = could not find container \"063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf\": container with ID starting with 063f59905f736a2de9368b017542765a781ee96b421301ea6537425bd81b32cf not found: ID does not exist" Feb 19 21:35:06 crc kubenswrapper[4886]: I0219 21:35:06.614525 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" path="/var/lib/kubelet/pods/464efe10-536c-4330-ac5f-4afc3484d46c/volumes" Feb 19 21:35:08 crc kubenswrapper[4886]: I0219 21:35:08.407544 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" containerID="851a25c2bb0602a487010b0458cecf535dd4e219ab9f8b5c7331d249a22b2da7" exitCode=0 Feb 19 21:35:08 crc kubenswrapper[4886]: I0219 21:35:08.407611 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" event={"ID":"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7","Type":"ContainerDied","Data":"851a25c2bb0602a487010b0458cecf535dd4e219ab9f8b5c7331d249a22b2da7"} Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.072901 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.108761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-ssh-key-openstack-edpm-ipam\") pod \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.108846 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmj9\" (UniqueName: \"kubernetes.io/projected/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-kube-api-access-5rmj9\") pod \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.108900 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-inventory\") pod \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\" (UID: \"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7\") " Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.116660 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-kube-api-access-5rmj9" (OuterVolumeSpecName: "kube-api-access-5rmj9") pod "2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" (UID: "2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7"). InnerVolumeSpecName "kube-api-access-5rmj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.150593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-inventory" (OuterVolumeSpecName: "inventory") pod "2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" (UID: "2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.152994 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" (UID: "2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.212562 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.212607 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rmj9\" (UniqueName: \"kubernetes.io/projected/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-kube-api-access-5rmj9\") on node \"crc\" DevicePath \"\"" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.212620 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.436020 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" 
event={"ID":"2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7","Type":"ContainerDied","Data":"e4a785667916dc836c5d60e0c90818c3b366085114d7ea253b89205adea8cf67"} Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.436386 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a785667916dc836c5d60e0c90818c3b366085114d7ea253b89205adea8cf67" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.436451 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-clvwl" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.530632 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2"] Feb 19 21:35:10 crc kubenswrapper[4886]: E0219 21:35:10.531056 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.531072 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:35:10 crc kubenswrapper[4886]: E0219 21:35:10.531092 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="registry-server" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.531100 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="registry-server" Feb 19 21:35:10 crc kubenswrapper[4886]: E0219 21:35:10.531120 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="extract-utilities" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.531126 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" 
containerName="extract-utilities" Feb 19 21:35:10 crc kubenswrapper[4886]: E0219 21:35:10.531151 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="extract-content" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.531157 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="extract-content" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.531367 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="464efe10-536c-4330-ac5f-4afc3484d46c" containerName="registry-server" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.531389 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3cd5a6-ab92-44ff-bdbd-a3f3e71b33a7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.532222 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.534405 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.534500 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.534545 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.534676 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.535229 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.535760 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.536066 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.536611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.536811 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.557474 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2"] Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.630001 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.632669 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.632719 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.632864 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j486z\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-kube-api-access-j486z\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.632913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.632945 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.632984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.633068 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.633367 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.633418 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc 
kubenswrapper[4886]: I0219 21:35:10.633639 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.633919 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.633982 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.634025 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: 
\"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.634125 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.634203 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736220 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j486z\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-kube-api-access-j486z\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736309 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736382 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736435 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736500 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736520 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736553 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736612 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: 
I0219 21:35:10.736655 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736686 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736757 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736789 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.736808 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.741035 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.741756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.743408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.743731 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.743995 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.744638 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.744764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.745018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.745438 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.746020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.746985 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.747529 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.747845 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.749709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.764627 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: 
\"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.769587 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j486z\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-kube-api-access-j486z\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dllm2\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:10 crc kubenswrapper[4886]: I0219 21:35:10.873054 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:35:12 crc kubenswrapper[4886]: I0219 21:35:11.511224 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2"] Feb 19 21:35:12 crc kubenswrapper[4886]: I0219 21:35:12.463301 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" event={"ID":"41106f2d-d256-49c7-8634-be6d3566ca31","Type":"ContainerStarted","Data":"8687e0611a3dfc8857bb2dcbaf65f37266027becee7c64da9ff5d727b0b2dfb9"} Feb 19 21:35:12 crc kubenswrapper[4886]: I0219 21:35:12.463888 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" event={"ID":"41106f2d-d256-49c7-8634-be6d3566ca31","Type":"ContainerStarted","Data":"4406bac2bd4a051313bb8baa745b2cbfc0442b047b3305e2383a8fc028a596c8"} Feb 19 21:35:12 crc kubenswrapper[4886]: I0219 21:35:12.518965 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" podStartSLOduration=2.111756717 podStartE2EDuration="2.518939355s" podCreationTimestamp="2026-02-19 21:35:10 +0000 UTC" 
firstStartedPulling="2026-02-19 21:35:11.535743697 +0000 UTC m=+2142.163586747" lastFinishedPulling="2026-02-19 21:35:11.942926295 +0000 UTC m=+2142.570769385" observedRunningTime="2026-02-19 21:35:12.504098228 +0000 UTC m=+2143.131941288" watchObservedRunningTime="2026-02-19 21:35:12.518939355 +0000 UTC m=+2143.146782435" Feb 19 21:35:18 crc kubenswrapper[4886]: I0219 21:35:18.324320 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:35:18 crc kubenswrapper[4886]: I0219 21:35:18.324803 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.276852 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xlrr"] Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.280038 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.321522 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xlrr"] Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.324625 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.324696 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.381651 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-catalog-content\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.381725 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-utilities\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.381876 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9lsxl\" (UniqueName: \"kubernetes.io/projected/914e22ac-b4f4-44fa-b7e7-3e3541b44282-kube-api-access-9lsxl\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.484313 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-catalog-content\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.484618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-utilities\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.484867 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lsxl\" (UniqueName: \"kubernetes.io/projected/914e22ac-b4f4-44fa-b7e7-3e3541b44282-kube-api-access-9lsxl\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.484898 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-catalog-content\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.485164 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-utilities\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.506030 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lsxl\" (UniqueName: \"kubernetes.io/projected/914e22ac-b4f4-44fa-b7e7-3e3541b44282-kube-api-access-9lsxl\") pod \"community-operators-4xlrr\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:48 crc kubenswrapper[4886]: I0219 21:35:48.612483 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:49 crc kubenswrapper[4886]: I0219 21:35:49.132727 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xlrr"] Feb 19 21:35:49 crc kubenswrapper[4886]: I0219 21:35:49.974366 4886 generic.go:334] "Generic (PLEG): container finished" podID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerID="3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e" exitCode=0 Feb 19 21:35:49 crc kubenswrapper[4886]: I0219 21:35:49.974467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerDied","Data":"3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e"} Feb 19 21:35:49 crc kubenswrapper[4886]: I0219 21:35:49.975120 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerStarted","Data":"6e66b71b6d083cfc067a747bb0dd382600ae51d74fb0d61b00c3f0b8cdf59605"} Feb 19 21:35:50 crc kubenswrapper[4886]: I0219 21:35:50.988328 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerStarted","Data":"df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b"} Feb 19 21:35:52 crc kubenswrapper[4886]: I0219 21:35:52.013177 4886 generic.go:334] "Generic (PLEG): container finished" podID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerID="df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b" exitCode=0 Feb 19 21:35:52 crc kubenswrapper[4886]: I0219 21:35:52.013910 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerDied","Data":"df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b"} Feb 19 21:35:54 crc kubenswrapper[4886]: I0219 21:35:54.042722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerStarted","Data":"fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd"} Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.442632 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xlrr" podStartSLOduration=5.019725143 podStartE2EDuration="7.442586817s" podCreationTimestamp="2026-02-19 21:35:48 +0000 UTC" firstStartedPulling="2026-02-19 21:35:49.97768422 +0000 UTC m=+2180.605527280" lastFinishedPulling="2026-02-19 21:35:52.400545904 +0000 UTC m=+2183.028388954" observedRunningTime="2026-02-19 21:35:54.066792432 +0000 UTC m=+2184.694635502" watchObservedRunningTime="2026-02-19 21:35:55.442586817 +0000 UTC m=+2186.070429867" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.447512 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7559n"] Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.451247 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.458142 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7559n"] Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.555320 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxs5s\" (UniqueName: \"kubernetes.io/projected/1991e5ac-1590-4371-bd94-05e86e5b475e-kube-api-access-zxs5s\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.555523 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-utilities\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.555638 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-catalog-content\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.657587 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-utilities\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.657664 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-catalog-content\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.657856 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxs5s\" (UniqueName: \"kubernetes.io/projected/1991e5ac-1590-4371-bd94-05e86e5b475e-kube-api-access-zxs5s\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.658150 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-utilities\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.658220 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-catalog-content\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.681911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxs5s\" (UniqueName: \"kubernetes.io/projected/1991e5ac-1590-4371-bd94-05e86e5b475e-kube-api-access-zxs5s\") pod \"certified-operators-7559n\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:55 crc kubenswrapper[4886]: I0219 21:35:55.774791 4886 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:35:56 crc kubenswrapper[4886]: I0219 21:35:56.307278 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7559n"] Feb 19 21:35:57 crc kubenswrapper[4886]: I0219 21:35:57.075283 4886 generic.go:334] "Generic (PLEG): container finished" podID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerID="d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9" exitCode=0 Feb 19 21:35:57 crc kubenswrapper[4886]: I0219 21:35:57.075373 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerDied","Data":"d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9"} Feb 19 21:35:57 crc kubenswrapper[4886]: I0219 21:35:57.075779 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerStarted","Data":"33c7b7e6e789839c80b25b129f103b4a2a4a2b15f10e7f5f4bc683726b05b2b0"} Feb 19 21:35:58 crc kubenswrapper[4886]: I0219 21:35:58.094534 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerStarted","Data":"4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2"} Feb 19 21:35:58 crc kubenswrapper[4886]: I0219 21:35:58.615650 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:58 crc kubenswrapper[4886]: I0219 21:35:58.616020 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:58 crc kubenswrapper[4886]: I0219 21:35:58.662917 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:35:59 crc kubenswrapper[4886]: I0219 21:35:59.116148 4886 generic.go:334] "Generic (PLEG): container finished" podID="41106f2d-d256-49c7-8634-be6d3566ca31" containerID="8687e0611a3dfc8857bb2dcbaf65f37266027becee7c64da9ff5d727b0b2dfb9" exitCode=0 Feb 19 21:35:59 crc kubenswrapper[4886]: I0219 21:35:59.116294 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" event={"ID":"41106f2d-d256-49c7-8634-be6d3566ca31","Type":"ContainerDied","Data":"8687e0611a3dfc8857bb2dcbaf65f37266027becee7c64da9ff5d727b0b2dfb9"} Feb 19 21:35:59 crc kubenswrapper[4886]: I0219 21:35:59.209502 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.135861 4886 generic.go:334] "Generic (PLEG): container finished" podID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerID="4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2" exitCode=0 Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.136100 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerDied","Data":"4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2"} Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.740409 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799672 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-libvirt-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799772 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799808 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799833 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799849 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ssh-key-openstack-edpm-ipam\") pod 
\"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ovn-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.799999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-neutron-metadata-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800067 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800093 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-nova-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800178 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-repo-setup-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: 
\"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800237 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800305 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-bootstrap-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800327 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-inventory\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800344 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800391 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j486z\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-kube-api-access-j486z\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 
21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.800411 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-power-monitoring-combined-ca-bundle\") pod \"41106f2d-d256-49c7-8634-be6d3566ca31\" (UID: \"41106f2d-d256-49c7-8634-be6d3566ca31\") " Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.819409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-kube-api-access-j486z" (OuterVolumeSpecName: "kube-api-access-j486z") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "kube-api-access-j486z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.819421 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.819476 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.819629 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.819726 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.820003 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.821337 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.821411 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.821538 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.822135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.822217 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.822327 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.842518 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.844364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.858866 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.867171 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-inventory" (OuterVolumeSpecName: "inventory") pod "41106f2d-d256-49c7-8634-be6d3566ca31" (UID: "41106f2d-d256-49c7-8634-be6d3566ca31"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902786 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902837 4886 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902849 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902859 4886 
reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902869 4886 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902880 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902892 4886 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902904 4886 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902918 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902935 4886 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902947 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902959 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902970 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j486z\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-kube-api-access-j486z\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902984 4886 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.902996 4886 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41106f2d-d256-49c7-8634-be6d3566ca31-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:00 crc kubenswrapper[4886]: I0219 21:36:00.903009 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41106f2d-d256-49c7-8634-be6d3566ca31-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.149884 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" event={"ID":"41106f2d-d256-49c7-8634-be6d3566ca31","Type":"ContainerDied","Data":"4406bac2bd4a051313bb8baa745b2cbfc0442b047b3305e2383a8fc028a596c8"} Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.150168 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4406bac2bd4a051313bb8baa745b2cbfc0442b047b3305e2383a8fc028a596c8" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.149915 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dllm2" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.152996 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerStarted","Data":"9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346"} Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.201759 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7559n" podStartSLOduration=2.754036438 podStartE2EDuration="6.201736812s" podCreationTimestamp="2026-02-19 21:35:55 +0000 UTC" firstStartedPulling="2026-02-19 21:35:57.078928246 +0000 UTC m=+2187.706771336" lastFinishedPulling="2026-02-19 21:36:00.52662865 +0000 UTC m=+2191.154471710" observedRunningTime="2026-02-19 21:36:01.178947898 +0000 UTC m=+2191.806790948" watchObservedRunningTime="2026-02-19 21:36:01.201736812 +0000 UTC m=+2191.829579872" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.260577 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r"] Feb 19 21:36:01 crc kubenswrapper[4886]: E0219 21:36:01.261141 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41106f2d-d256-49c7-8634-be6d3566ca31" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.261160 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="41106f2d-d256-49c7-8634-be6d3566ca31" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.261428 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="41106f2d-d256-49c7-8634-be6d3566ca31" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.262325 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.265575 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.265630 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.265775 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.265868 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.265989 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.274293 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r"] Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.311113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.311166 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.311193 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.311257 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mbfm\" (UniqueName: \"kubernetes.io/projected/df405dfb-90a8-4968-a9da-54ca95103251-kube-api-access-7mbfm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.311392 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/df405dfb-90a8-4968-a9da-54ca95103251-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" 
(UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.413866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.413912 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.413939 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.413970 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mbfm\" (UniqueName: \"kubernetes.io/projected/df405dfb-90a8-4968-a9da-54ca95103251-kube-api-access-7mbfm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.414045 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/df405dfb-90a8-4968-a9da-54ca95103251-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.415571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/df405dfb-90a8-4968-a9da-54ca95103251-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.418365 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.418666 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.421057 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 
19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.432111 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mbfm\" (UniqueName: \"kubernetes.io/projected/df405dfb-90a8-4968-a9da-54ca95103251-kube-api-access-7mbfm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-72v9r\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:01 crc kubenswrapper[4886]: I0219 21:36:01.588004 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:36:02 crc kubenswrapper[4886]: I0219 21:36:02.172212 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r"] Feb 19 21:36:02 crc kubenswrapper[4886]: W0219 21:36:02.177954 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf405dfb_90a8_4968_a9da_54ca95103251.slice/crio-21c9a8985c0749ad98a5bfae5fba529275e08483e40d829d7e3fa86d398f483d WatchSource:0}: Error finding container 21c9a8985c0749ad98a5bfae5fba529275e08483e40d829d7e3fa86d398f483d: Status 404 returned error can't find the container with id 21c9a8985c0749ad98a5bfae5fba529275e08483e40d829d7e3fa86d398f483d Feb 19 21:36:02 crc kubenswrapper[4886]: I0219 21:36:02.633966 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xlrr"] Feb 19 21:36:02 crc kubenswrapper[4886]: I0219 21:36:02.634185 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xlrr" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="registry-server" containerID="cri-o://fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd" gracePeriod=2 Feb 19 21:36:02 crc kubenswrapper[4886]: E0219 21:36:02.695827 4886 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914e22ac_b4f4_44fa_b7e7_3e3541b44282.slice/crio-fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd.scope\": RecentStats: unable to find data in memory cache]" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.117312 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.178321 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lsxl\" (UniqueName: \"kubernetes.io/projected/914e22ac-b4f4-44fa-b7e7-3e3541b44282-kube-api-access-9lsxl\") pod \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.178371 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-utilities\") pod \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.178422 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-catalog-content\") pod \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\" (UID: \"914e22ac-b4f4-44fa-b7e7-3e3541b44282\") " Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.180430 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-utilities" (OuterVolumeSpecName: "utilities") pod "914e22ac-b4f4-44fa-b7e7-3e3541b44282" (UID: "914e22ac-b4f4-44fa-b7e7-3e3541b44282"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.191838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914e22ac-b4f4-44fa-b7e7-3e3541b44282-kube-api-access-9lsxl" (OuterVolumeSpecName: "kube-api-access-9lsxl") pod "914e22ac-b4f4-44fa-b7e7-3e3541b44282" (UID: "914e22ac-b4f4-44fa-b7e7-3e3541b44282"). InnerVolumeSpecName "kube-api-access-9lsxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.197763 4886 generic.go:334] "Generic (PLEG): container finished" podID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerID="fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd" exitCode=0 Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.197822 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerDied","Data":"fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd"} Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.197849 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xlrr" event={"ID":"914e22ac-b4f4-44fa-b7e7-3e3541b44282","Type":"ContainerDied","Data":"6e66b71b6d083cfc067a747bb0dd382600ae51d74fb0d61b00c3f0b8cdf59605"} Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.197858 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xlrr" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.197865 4886 scope.go:117] "RemoveContainer" containerID="fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.200474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" event={"ID":"df405dfb-90a8-4968-a9da-54ca95103251","Type":"ContainerStarted","Data":"5c6ac5108cd8d3e0faff9dbb15952d9eeae32811b0f9424e57064d9973b2a801"} Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.200523 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" event={"ID":"df405dfb-90a8-4968-a9da-54ca95103251","Type":"ContainerStarted","Data":"21c9a8985c0749ad98a5bfae5fba529275e08483e40d829d7e3fa86d398f483d"} Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.231489 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" podStartSLOduration=1.733135965 podStartE2EDuration="2.231469536s" podCreationTimestamp="2026-02-19 21:36:01 +0000 UTC" firstStartedPulling="2026-02-19 21:36:02.180832739 +0000 UTC m=+2192.808675789" lastFinishedPulling="2026-02-19 21:36:02.67916627 +0000 UTC m=+2193.307009360" observedRunningTime="2026-02-19 21:36:03.222242528 +0000 UTC m=+2193.850085578" watchObservedRunningTime="2026-02-19 21:36:03.231469536 +0000 UTC m=+2193.859312586" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.248382 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "914e22ac-b4f4-44fa-b7e7-3e3541b44282" (UID: "914e22ac-b4f4-44fa-b7e7-3e3541b44282"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.251823 4886 scope.go:117] "RemoveContainer" containerID="df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.274605 4886 scope.go:117] "RemoveContainer" containerID="3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.281035 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lsxl\" (UniqueName: \"kubernetes.io/projected/914e22ac-b4f4-44fa-b7e7-3e3541b44282-kube-api-access-9lsxl\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.281065 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.281103 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e22ac-b4f4-44fa-b7e7-3e3541b44282-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.294978 4886 scope.go:117] "RemoveContainer" containerID="fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd" Feb 19 21:36:03 crc kubenswrapper[4886]: E0219 21:36:03.299339 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd\": container with ID starting with fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd not found: ID does not exist" containerID="fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.299371 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd"} err="failed to get container status \"fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd\": rpc error: code = NotFound desc = could not find container \"fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd\": container with ID starting with fb2094c95f9e66026a9cbfe2f03b9104a87435e686b594800ecfb2c8e6d8ccbd not found: ID does not exist" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.299393 4886 scope.go:117] "RemoveContainer" containerID="df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b" Feb 19 21:36:03 crc kubenswrapper[4886]: E0219 21:36:03.299604 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b\": container with ID starting with df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b not found: ID does not exist" containerID="df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.299629 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b"} err="failed to get container status \"df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b\": rpc error: code = NotFound desc = could not find container \"df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b\": container with ID starting with df8554912888db8f10ae1119d529cebd1a12a68d2ff3e4b4ef862d747f71cc5b not found: ID does not exist" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.299644 4886 scope.go:117] "RemoveContainer" containerID="3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e" Feb 19 21:36:03 crc kubenswrapper[4886]: E0219 21:36:03.299830 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e\": container with ID starting with 3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e not found: ID does not exist" containerID="3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.299851 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e"} err="failed to get container status \"3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e\": rpc error: code = NotFound desc = could not find container \"3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e\": container with ID starting with 3e36f499b66773b26cd7da72c52b1c4f5210f67309b840eaf7fcd316a3387e8e not found: ID does not exist" Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.554843 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xlrr"] Feb 19 21:36:03 crc kubenswrapper[4886]: I0219 21:36:03.569141 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xlrr"] Feb 19 21:36:04 crc kubenswrapper[4886]: I0219 21:36:04.621442 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" path="/var/lib/kubelet/pods/914e22ac-b4f4-44fa-b7e7-3e3541b44282/volumes" Feb 19 21:36:05 crc kubenswrapper[4886]: I0219 21:36:05.775789 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:36:05 crc kubenswrapper[4886]: I0219 21:36:05.775876 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:36:05 crc kubenswrapper[4886]: I0219 21:36:05.845450 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:36:06 crc kubenswrapper[4886]: I0219 21:36:06.309082 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:36:06 crc kubenswrapper[4886]: I0219 21:36:06.838981 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7559n"] Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.062848 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-xx9l4"] Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.076429 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-xx9l4"] Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.262594 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7559n" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="registry-server" containerID="cri-o://9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346" gracePeriod=2 Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.616908 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489598d7-0933-4271-92d3-26a24263c4bc" path="/var/lib/kubelet/pods/489598d7-0933-4271-92d3-26a24263c4bc/volumes" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.763471 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.835104 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-catalog-content\") pod \"1991e5ac-1590-4371-bd94-05e86e5b475e\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.835394 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-utilities\") pod \"1991e5ac-1590-4371-bd94-05e86e5b475e\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.835435 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxs5s\" (UniqueName: \"kubernetes.io/projected/1991e5ac-1590-4371-bd94-05e86e5b475e-kube-api-access-zxs5s\") pod \"1991e5ac-1590-4371-bd94-05e86e5b475e\" (UID: \"1991e5ac-1590-4371-bd94-05e86e5b475e\") " Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.836978 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-utilities" (OuterVolumeSpecName: "utilities") pod "1991e5ac-1590-4371-bd94-05e86e5b475e" (UID: "1991e5ac-1590-4371-bd94-05e86e5b475e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.837400 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.849580 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1991e5ac-1590-4371-bd94-05e86e5b475e-kube-api-access-zxs5s" (OuterVolumeSpecName: "kube-api-access-zxs5s") pod "1991e5ac-1590-4371-bd94-05e86e5b475e" (UID: "1991e5ac-1590-4371-bd94-05e86e5b475e"). InnerVolumeSpecName "kube-api-access-zxs5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.903076 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1991e5ac-1590-4371-bd94-05e86e5b475e" (UID: "1991e5ac-1590-4371-bd94-05e86e5b475e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.939959 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1991e5ac-1590-4371-bd94-05e86e5b475e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:08 crc kubenswrapper[4886]: I0219 21:36:08.940002 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxs5s\" (UniqueName: \"kubernetes.io/projected/1991e5ac-1590-4371-bd94-05e86e5b475e-kube-api-access-zxs5s\") on node \"crc\" DevicePath \"\"" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.276197 4886 generic.go:334] "Generic (PLEG): container finished" podID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerID="9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346" exitCode=0 Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.276259 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerDied","Data":"9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346"} Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.276303 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7559n" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.276325 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7559n" event={"ID":"1991e5ac-1590-4371-bd94-05e86e5b475e","Type":"ContainerDied","Data":"33c7b7e6e789839c80b25b129f103b4a2a4a2b15f10e7f5f4bc683726b05b2b0"} Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.276353 4886 scope.go:117] "RemoveContainer" containerID="9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.302858 4886 scope.go:117] "RemoveContainer" containerID="4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.316790 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7559n"] Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.325219 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7559n"] Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.329227 4886 scope.go:117] "RemoveContainer" containerID="d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.402533 4886 scope.go:117] "RemoveContainer" containerID="9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346" Feb 19 21:36:09 crc kubenswrapper[4886]: E0219 21:36:09.402955 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346\": container with ID starting with 9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346 not found: ID does not exist" containerID="9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.403022 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346"} err="failed to get container status \"9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346\": rpc error: code = NotFound desc = could not find container \"9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346\": container with ID starting with 9f698fe939b064b81890c0690ee84c245485ddbdbc293e016491341317792346 not found: ID does not exist" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.403065 4886 scope.go:117] "RemoveContainer" containerID="4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2" Feb 19 21:36:09 crc kubenswrapper[4886]: E0219 21:36:09.403505 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2\": container with ID starting with 4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2 not found: ID does not exist" containerID="4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.403547 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2"} err="failed to get container status \"4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2\": rpc error: code = NotFound desc = could not find container \"4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2\": container with ID starting with 4a86d1a6480f6d71c1a42246ede166b37bcc89d2df93c7fd03066954d9c08bc2 not found: ID does not exist" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.403573 4886 scope.go:117] "RemoveContainer" containerID="d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9" Feb 19 21:36:09 crc kubenswrapper[4886]: E0219 
21:36:09.403915 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9\": container with ID starting with d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9 not found: ID does not exist" containerID="d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9" Feb 19 21:36:09 crc kubenswrapper[4886]: I0219 21:36:09.403961 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9"} err="failed to get container status \"d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9\": rpc error: code = NotFound desc = could not find container \"d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9\": container with ID starting with d45c23e401adb8e8f15e4536f44cec1c655859e72b654d53391a8dfab83fcfb9 not found: ID does not exist" Feb 19 21:36:10 crc kubenswrapper[4886]: I0219 21:36:10.625041 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" path="/var/lib/kubelet/pods/1991e5ac-1590-4371-bd94-05e86e5b475e/volumes" Feb 19 21:36:18 crc kubenswrapper[4886]: I0219 21:36:18.324622 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:36:18 crc kubenswrapper[4886]: I0219 21:36:18.325365 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 21:36:18 crc kubenswrapper[4886]: I0219 21:36:18.325428 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:36:18 crc kubenswrapper[4886]: I0219 21:36:18.326628 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:36:18 crc kubenswrapper[4886]: I0219 21:36:18.326720 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" gracePeriod=600 Feb 19 21:36:18 crc kubenswrapper[4886]: E0219 21:36:18.453493 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:36:19 crc kubenswrapper[4886]: I0219 21:36:19.411039 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" exitCode=0 Feb 19 21:36:19 crc kubenswrapper[4886]: I0219 21:36:19.411218 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" 
event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795"} Feb 19 21:36:19 crc kubenswrapper[4886]: I0219 21:36:19.411406 4886 scope.go:117] "RemoveContainer" containerID="35564522d2e7778839438b05f01b84ea628562673e5c8941499cfff7719ef457" Feb 19 21:36:19 crc kubenswrapper[4886]: I0219 21:36:19.412232 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:36:19 crc kubenswrapper[4886]: E0219 21:36:19.412596 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:36:34 crc kubenswrapper[4886]: I0219 21:36:34.603286 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:36:34 crc kubenswrapper[4886]: E0219 21:36:34.604229 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:36:47 crc kubenswrapper[4886]: I0219 21:36:47.601788 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:36:47 crc kubenswrapper[4886]: E0219 21:36:47.602748 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:36:51 crc kubenswrapper[4886]: I0219 21:36:51.044923 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zdcl5"] Feb 19 21:36:51 crc kubenswrapper[4886]: I0219 21:36:51.058900 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zdcl5"] Feb 19 21:36:52 crc kubenswrapper[4886]: I0219 21:36:52.618289 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c85441-b11d-4dc2-8964-872b7934ef4c" path="/var/lib/kubelet/pods/d3c85441-b11d-4dc2-8964-872b7934ef4c/volumes" Feb 19 21:36:54 crc kubenswrapper[4886]: I0219 21:36:54.597423 4886 scope.go:117] "RemoveContainer" containerID="853dad30726458ee5a39b1d7e6075ca6062ae495261685ab362f5526a7d55ec0" Feb 19 21:36:54 crc kubenswrapper[4886]: I0219 21:36:54.651057 4886 scope.go:117] "RemoveContainer" containerID="50388f689c236d04897ea174f8b7774e8be5e7752b4ca82e711f1e1cad6502a4" Feb 19 21:37:02 crc kubenswrapper[4886]: I0219 21:37:02.601988 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:37:02 crc kubenswrapper[4886]: E0219 21:37:02.602955 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:37:07 crc kubenswrapper[4886]: I0219 
21:37:07.017666 4886 generic.go:334] "Generic (PLEG): container finished" podID="df405dfb-90a8-4968-a9da-54ca95103251" containerID="5c6ac5108cd8d3e0faff9dbb15952d9eeae32811b0f9424e57064d9973b2a801" exitCode=0 Feb 19 21:37:07 crc kubenswrapper[4886]: I0219 21:37:07.017761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" event={"ID":"df405dfb-90a8-4968-a9da-54ca95103251","Type":"ContainerDied","Data":"5c6ac5108cd8d3e0faff9dbb15952d9eeae32811b0f9424e57064d9973b2a801"} Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.615813 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.719178 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/df405dfb-90a8-4968-a9da-54ca95103251-ovncontroller-config-0\") pod \"df405dfb-90a8-4968-a9da-54ca95103251\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.719469 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ssh-key-openstack-edpm-ipam\") pod \"df405dfb-90a8-4968-a9da-54ca95103251\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.719929 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mbfm\" (UniqueName: \"kubernetes.io/projected/df405dfb-90a8-4968-a9da-54ca95103251-kube-api-access-7mbfm\") pod \"df405dfb-90a8-4968-a9da-54ca95103251\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.720047 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ovn-combined-ca-bundle\") pod \"df405dfb-90a8-4968-a9da-54ca95103251\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.720557 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-inventory\") pod \"df405dfb-90a8-4968-a9da-54ca95103251\" (UID: \"df405dfb-90a8-4968-a9da-54ca95103251\") " Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.726273 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df405dfb-90a8-4968-a9da-54ca95103251-kube-api-access-7mbfm" (OuterVolumeSpecName: "kube-api-access-7mbfm") pod "df405dfb-90a8-4968-a9da-54ca95103251" (UID: "df405dfb-90a8-4968-a9da-54ca95103251"). InnerVolumeSpecName "kube-api-access-7mbfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.727429 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "df405dfb-90a8-4968-a9da-54ca95103251" (UID: "df405dfb-90a8-4968-a9da-54ca95103251"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.750446 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df405dfb-90a8-4968-a9da-54ca95103251" (UID: "df405dfb-90a8-4968-a9da-54ca95103251"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.752202 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-inventory" (OuterVolumeSpecName: "inventory") pod "df405dfb-90a8-4968-a9da-54ca95103251" (UID: "df405dfb-90a8-4968-a9da-54ca95103251"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.758308 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df405dfb-90a8-4968-a9da-54ca95103251-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "df405dfb-90a8-4968-a9da-54ca95103251" (UID: "df405dfb-90a8-4968-a9da-54ca95103251"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.825443 4886 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/df405dfb-90a8-4968-a9da-54ca95103251-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.825470 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.825479 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mbfm\" (UniqueName: \"kubernetes.io/projected/df405dfb-90a8-4968-a9da-54ca95103251-kube-api-access-7mbfm\") on node \"crc\" DevicePath \"\"" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.825489 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:37:08 crc kubenswrapper[4886]: I0219 21:37:08.825498 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df405dfb-90a8-4968-a9da-54ca95103251-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.049841 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" event={"ID":"df405dfb-90a8-4968-a9da-54ca95103251","Type":"ContainerDied","Data":"21c9a8985c0749ad98a5bfae5fba529275e08483e40d829d7e3fa86d398f483d"} Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.049900 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21c9a8985c0749ad98a5bfae5fba529275e08483e40d829d7e3fa86d398f483d" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.049951 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-72v9r" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.197440 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7"] Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198510 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="extract-content" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198557 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="extract-content" Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198580 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="extract-utilities" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198597 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="extract-utilities" Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198648 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df405dfb-90a8-4968-a9da-54ca95103251" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198669 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="df405dfb-90a8-4968-a9da-54ca95103251" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198721 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="registry-server" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198737 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="registry-server" Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198786 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="registry-server" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198805 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="registry-server" Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198836 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="extract-utilities" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198853 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="extract-utilities" Feb 19 21:37:09 crc kubenswrapper[4886]: E0219 21:37:09.198875 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="extract-content" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.198897 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="extract-content" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.199459 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1991e5ac-1590-4371-bd94-05e86e5b475e" containerName="registry-server" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.199527 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="914e22ac-b4f4-44fa-b7e7-3e3541b44282" containerName="registry-server" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.199564 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="df405dfb-90a8-4968-a9da-54ca95103251" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.201476 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.205913 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.207754 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.208367 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.208724 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.208918 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7"] Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.209193 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.209458 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.234809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.234981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.235017 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.235225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfsd\" (UniqueName: \"kubernetes.io/projected/e44f4649-e868-412d-88f9-07fcfea3e71a-kube-api-access-qhfsd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.235563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.235667 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.336876 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.336926 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.337000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfsd\" (UniqueName: \"kubernetes.io/projected/e44f4649-e868-412d-88f9-07fcfea3e71a-kube-api-access-qhfsd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.337093 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.337122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.337184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.341408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.341639 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: 
\"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.343273 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.344609 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.349698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.365871 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfsd\" (UniqueName: \"kubernetes.io/projected/e44f4649-e868-412d-88f9-07fcfea3e71a-kube-api-access-qhfsd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:09 crc kubenswrapper[4886]: I0219 21:37:09.528671 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:37:10 crc kubenswrapper[4886]: I0219 21:37:10.170606 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7"] Feb 19 21:37:11 crc kubenswrapper[4886]: I0219 21:37:11.085608 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" event={"ID":"e44f4649-e868-412d-88f9-07fcfea3e71a","Type":"ContainerStarted","Data":"ebbd91725f82839a1eff969437697865c0f38dbbdad41afed83523109b1e4ca6"} Feb 19 21:37:11 crc kubenswrapper[4886]: I0219 21:37:11.086003 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" event={"ID":"e44f4649-e868-412d-88f9-07fcfea3e71a","Type":"ContainerStarted","Data":"61ae6816f0935a0fb7ba8a95525ce1159ae7ef2890dd0594c0911a27027a8803"} Feb 19 21:37:11 crc kubenswrapper[4886]: I0219 21:37:11.133556 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" podStartSLOduration=1.699505529 podStartE2EDuration="2.133529492s" podCreationTimestamp="2026-02-19 21:37:09 +0000 UTC" firstStartedPulling="2026-02-19 21:37:10.177772763 +0000 UTC m=+2260.805615813" lastFinishedPulling="2026-02-19 21:37:10.611796716 +0000 UTC m=+2261.239639776" observedRunningTime="2026-02-19 21:37:11.111234202 +0000 UTC m=+2261.739077272" watchObservedRunningTime="2026-02-19 21:37:11.133529492 +0000 UTC m=+2261.761372582" Feb 19 21:37:13 crc kubenswrapper[4886]: I0219 21:37:13.602074 4886 scope.go:117] "RemoveContainer" 
containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:37:13 crc kubenswrapper[4886]: E0219 21:37:13.603043 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:37:28 crc kubenswrapper[4886]: I0219 21:37:28.601828 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:37:28 crc kubenswrapper[4886]: E0219 21:37:28.608155 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:37:42 crc kubenswrapper[4886]: I0219 21:37:42.601370 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:37:42 crc kubenswrapper[4886]: E0219 21:37:42.602311 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:37:56 crc kubenswrapper[4886]: I0219 21:37:56.601323 4886 scope.go:117] 
"RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:37:56 crc kubenswrapper[4886]: E0219 21:37:56.602124 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:38:00 crc kubenswrapper[4886]: I0219 21:38:00.670209 4886 generic.go:334] "Generic (PLEG): container finished" podID="e44f4649-e868-412d-88f9-07fcfea3e71a" containerID="ebbd91725f82839a1eff969437697865c0f38dbbdad41afed83523109b1e4ca6" exitCode=0 Feb 19 21:38:00 crc kubenswrapper[4886]: I0219 21:38:00.670272 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" event={"ID":"e44f4649-e868-412d-88f9-07fcfea3e71a","Type":"ContainerDied","Data":"ebbd91725f82839a1eff969437697865c0f38dbbdad41afed83523109b1e4ca6"} Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.215885 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.329863 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-nova-metadata-neutron-config-0\") pod \"e44f4649-e868-412d-88f9-07fcfea3e71a\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.329979 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhfsd\" (UniqueName: \"kubernetes.io/projected/e44f4649-e868-412d-88f9-07fcfea3e71a-kube-api-access-qhfsd\") pod \"e44f4649-e868-412d-88f9-07fcfea3e71a\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.330020 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-metadata-combined-ca-bundle\") pod \"e44f4649-e868-412d-88f9-07fcfea3e71a\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.330115 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-inventory\") pod \"e44f4649-e868-412d-88f9-07fcfea3e71a\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.330167 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e44f4649-e868-412d-88f9-07fcfea3e71a\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " Feb 
19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.330340 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-ssh-key-openstack-edpm-ipam\") pod \"e44f4649-e868-412d-88f9-07fcfea3e71a\" (UID: \"e44f4649-e868-412d-88f9-07fcfea3e71a\") " Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.337468 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e44f4649-e868-412d-88f9-07fcfea3e71a" (UID: "e44f4649-e868-412d-88f9-07fcfea3e71a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.337600 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44f4649-e868-412d-88f9-07fcfea3e71a-kube-api-access-qhfsd" (OuterVolumeSpecName: "kube-api-access-qhfsd") pod "e44f4649-e868-412d-88f9-07fcfea3e71a" (UID: "e44f4649-e868-412d-88f9-07fcfea3e71a"). InnerVolumeSpecName "kube-api-access-qhfsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.361887 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e44f4649-e868-412d-88f9-07fcfea3e71a" (UID: "e44f4649-e868-412d-88f9-07fcfea3e71a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.362553 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e44f4649-e868-412d-88f9-07fcfea3e71a" (UID: "e44f4649-e868-412d-88f9-07fcfea3e71a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.371107 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-inventory" (OuterVolumeSpecName: "inventory") pod "e44f4649-e868-412d-88f9-07fcfea3e71a" (UID: "e44f4649-e868-412d-88f9-07fcfea3e71a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.387977 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e44f4649-e868-412d-88f9-07fcfea3e71a" (UID: "e44f4649-e868-412d-88f9-07fcfea3e71a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.433614 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.433670 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.433689 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhfsd\" (UniqueName: \"kubernetes.io/projected/e44f4649-e868-412d-88f9-07fcfea3e71a-kube-api-access-qhfsd\") on node \"crc\" DevicePath \"\"" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.433710 4886 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.433732 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.433748 4886 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e44f4649-e868-412d-88f9-07fcfea3e71a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.693473 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" event={"ID":"e44f4649-e868-412d-88f9-07fcfea3e71a","Type":"ContainerDied","Data":"61ae6816f0935a0fb7ba8a95525ce1159ae7ef2890dd0594c0911a27027a8803"} Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.693828 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ae6816f0935a0fb7ba8a95525ce1159ae7ef2890dd0594c0911a27027a8803" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.693544 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-52sb7" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.949444 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p"] Feb 19 21:38:02 crc kubenswrapper[4886]: E0219 21:38:02.950465 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44f4649-e868-412d-88f9-07fcfea3e71a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.950482 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44f4649-e868-412d-88f9-07fcfea3e71a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.950947 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44f4649-e868-412d-88f9-07fcfea3e71a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.952045 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.957705 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.958137 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.958245 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:38:02 crc kubenswrapper[4886]: I0219 21:38:02.998592 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.003979 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p"] Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.008193 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.099981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.100037 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.100096 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.100155 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.100227 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cdrz\" (UniqueName: \"kubernetes.io/projected/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-kube-api-access-6cdrz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.202085 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cdrz\" (UniqueName: \"kubernetes.io/projected/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-kube-api-access-6cdrz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.202214 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.202235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.202292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.202364 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.206200 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: 
\"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.207876 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.207953 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.210354 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.221966 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cdrz\" (UniqueName: \"kubernetes.io/projected/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-kube-api-access-6cdrz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.320526 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:38:03 crc kubenswrapper[4886]: I0219 21:38:03.898402 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p"] Feb 19 21:38:04 crc kubenswrapper[4886]: I0219 21:38:04.719812 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" event={"ID":"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8","Type":"ContainerStarted","Data":"d8f3b77bfd97452f454ab38129f7458bc76f85fb24a468f860bfcc1b7ce0b572"} Feb 19 21:38:04 crc kubenswrapper[4886]: I0219 21:38:04.721236 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" event={"ID":"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8","Type":"ContainerStarted","Data":"4936c305a67ba2b7c8d28810ee2b391394753fb1046f710029f321b6e7d3dcee"} Feb 19 21:38:04 crc kubenswrapper[4886]: I0219 21:38:04.744004 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" podStartSLOduration=2.298611297 podStartE2EDuration="2.74398327s" podCreationTimestamp="2026-02-19 21:38:02 +0000 UTC" firstStartedPulling="2026-02-19 21:38:03.898625265 +0000 UTC m=+2314.526468305" lastFinishedPulling="2026-02-19 21:38:04.343997218 +0000 UTC m=+2314.971840278" observedRunningTime="2026-02-19 21:38:04.737659928 +0000 UTC m=+2315.365502988" watchObservedRunningTime="2026-02-19 21:38:04.74398327 +0000 UTC m=+2315.371826330" Feb 19 21:38:10 crc kubenswrapper[4886]: I0219 21:38:10.615734 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:38:10 crc kubenswrapper[4886]: E0219 21:38:10.617918 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:38:25 crc kubenswrapper[4886]: I0219 21:38:25.603476 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:38:25 crc kubenswrapper[4886]: E0219 21:38:25.605143 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:38:37 crc kubenswrapper[4886]: I0219 21:38:37.600813 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:38:37 crc kubenswrapper[4886]: E0219 21:38:37.601807 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:38:48 crc kubenswrapper[4886]: I0219 21:38:48.602098 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:38:48 crc kubenswrapper[4886]: E0219 21:38:48.603652 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:39:00 crc kubenswrapper[4886]: I0219 21:39:00.618199 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:39:00 crc kubenswrapper[4886]: E0219 21:39:00.619256 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:39:11 crc kubenswrapper[4886]: I0219 21:39:11.601993 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:39:11 crc kubenswrapper[4886]: E0219 21:39:11.602817 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:39:22 crc kubenswrapper[4886]: I0219 21:39:22.602956 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:39:22 crc kubenswrapper[4886]: E0219 21:39:22.604001 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:39:37 crc kubenswrapper[4886]: I0219 21:39:37.601931 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:39:37 crc kubenswrapper[4886]: E0219 21:39:37.603055 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:39:52 crc kubenswrapper[4886]: I0219 21:39:52.603569 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:39:52 crc kubenswrapper[4886]: E0219 21:39:52.604541 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:40:07 crc kubenswrapper[4886]: I0219 21:40:07.603059 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:40:07 crc kubenswrapper[4886]: E0219 21:40:07.604319 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:40:19 crc kubenswrapper[4886]: I0219 21:40:19.603021 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:40:19 crc kubenswrapper[4886]: E0219 21:40:19.605322 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:40:31 crc kubenswrapper[4886]: I0219 21:40:31.602578 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:40:31 crc kubenswrapper[4886]: E0219 21:40:31.603913 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:40:46 crc kubenswrapper[4886]: I0219 21:40:46.602051 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:40:46 crc kubenswrapper[4886]: E0219 21:40:46.618187 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:41:01 crc kubenswrapper[4886]: I0219 21:41:01.602201 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:41:01 crc kubenswrapper[4886]: E0219 21:41:01.603086 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:41:12 crc kubenswrapper[4886]: I0219 21:41:12.602291 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:41:12 crc kubenswrapper[4886]: E0219 21:41:12.603158 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:41:24 crc kubenswrapper[4886]: I0219 21:41:24.602475 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:41:25 crc kubenswrapper[4886]: I0219 21:41:25.491787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"7a16c1e2937e3aad302477d79d5b66b3962f3379994a86f24c23094445ae2c37"} Feb 19 21:41:53 crc kubenswrapper[4886]: I0219 21:41:53.852479 4886 generic.go:334] "Generic (PLEG): container finished" podID="2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" containerID="d8f3b77bfd97452f454ab38129f7458bc76f85fb24a468f860bfcc1b7ce0b572" exitCode=0 Feb 19 21:41:53 crc kubenswrapper[4886]: I0219 21:41:53.852505 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" event={"ID":"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8","Type":"ContainerDied","Data":"d8f3b77bfd97452f454ab38129f7458bc76f85fb24a468f860bfcc1b7ce0b572"} Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.427272 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.521612 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-inventory\") pod \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.521761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cdrz\" (UniqueName: \"kubernetes.io/projected/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-kube-api-access-6cdrz\") pod \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.521807 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-combined-ca-bundle\") pod \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.521889 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-secret-0\") pod \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.521961 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-ssh-key-openstack-edpm-ipam\") pod \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\" (UID: \"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8\") " Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.527242 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" (UID: "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.527696 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-kube-api-access-6cdrz" (OuterVolumeSpecName: "kube-api-access-6cdrz") pod "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" (UID: "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8"). InnerVolumeSpecName "kube-api-access-6cdrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.559413 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" (UID: "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.569677 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-inventory" (OuterVolumeSpecName: "inventory") pod "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" (UID: "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.570143 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" (UID: "2fee8e2d-28cf-4824-b8b2-a0da5f9954b8"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.625395 4886 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.625652 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.625671 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.625682 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cdrz\" (UniqueName: \"kubernetes.io/projected/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-kube-api-access-6cdrz\") on node \"crc\" DevicePath \"\"" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.625692 4886 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fee8e2d-28cf-4824-b8b2-a0da5f9954b8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.885722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" event={"ID":"2fee8e2d-28cf-4824-b8b2-a0da5f9954b8","Type":"ContainerDied","Data":"4936c305a67ba2b7c8d28810ee2b391394753fb1046f710029f321b6e7d3dcee"} Feb 19 21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.885777 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4936c305a67ba2b7c8d28810ee2b391394753fb1046f710029f321b6e7d3dcee" Feb 19 
21:41:55 crc kubenswrapper[4886]: I0219 21:41:55.885817 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fkm9p" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.042983 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm"] Feb 19 21:41:56 crc kubenswrapper[4886]: E0219 21:41:56.043769 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.043818 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.044385 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fee8e2d-28cf-4824-b8b2-a0da5f9954b8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.045575 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.047666 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.048183 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.048570 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.049618 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.049722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.049800 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.055563 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.063567 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm"] Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137008 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137079 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137154 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137222 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dfp\" (UniqueName: \"kubernetes.io/projected/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-kube-api-access-d8dfp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137289 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137324 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137348 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" 
(UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.137464 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239149 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239554 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239616 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239653 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239726 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239758 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: 
\"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239913 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.239939 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dfp\" (UniqueName: \"kubernetes.io/projected/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-kube-api-access-d8dfp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.240000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.240986 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.243950 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.244738 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.245512 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.248748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.249338 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.253811 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.274853 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.281288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.295829 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.316038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dfp\" (UniqueName: 
\"kubernetes.io/projected/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-kube-api-access-d8dfp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4bznm\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.366855 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:41:56 crc kubenswrapper[4886]: I0219 21:41:56.991422 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm"] Feb 19 21:41:57 crc kubenswrapper[4886]: I0219 21:41:57.003254 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:41:57 crc kubenswrapper[4886]: I0219 21:41:57.906340 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" event={"ID":"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf","Type":"ContainerStarted","Data":"8dc74072cb7a8b120e6e71f7f0cd82649597d03e718e2c9d1c93735bafbfb135"} Feb 19 21:41:57 crc kubenswrapper[4886]: I0219 21:41:57.906644 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" event={"ID":"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf","Type":"ContainerStarted","Data":"8e63cc46aecf639ab12acc5b50cbc94f4a6d0956b508259a2d7f6fa505c5190f"} Feb 19 21:41:57 crc kubenswrapper[4886]: I0219 21:41:57.928756 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" podStartSLOduration=1.491764457 podStartE2EDuration="1.928731243s" podCreationTimestamp="2026-02-19 21:41:56 +0000 UTC" firstStartedPulling="2026-02-19 21:41:57.003047906 +0000 UTC m=+2547.630890956" lastFinishedPulling="2026-02-19 21:41:57.440014692 +0000 UTC m=+2548.067857742" observedRunningTime="2026-02-19 21:41:57.924336884 
+0000 UTC m=+2548.552179954" watchObservedRunningTime="2026-02-19 21:41:57.928731243 +0000 UTC m=+2548.556574333" Feb 19 21:43:48 crc kubenswrapper[4886]: I0219 21:43:48.324638 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:43:48 crc kubenswrapper[4886]: I0219 21:43:48.325335 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.438649 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cn5f4"] Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.442043 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.455413 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cn5f4"] Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.622308 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5qw\" (UniqueName: \"kubernetes.io/projected/bb68c479-9456-48d9-8687-b059d838d9fe-kube-api-access-4p5qw\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.623447 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-utilities\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.624108 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-catalog-content\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.726773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-catalog-content\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.726907 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4p5qw\" (UniqueName: \"kubernetes.io/projected/bb68c479-9456-48d9-8687-b059d838d9fe-kube-api-access-4p5qw\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.726959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-utilities\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.727406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-catalog-content\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.727470 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-utilities\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.749786 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5qw\" (UniqueName: \"kubernetes.io/projected/bb68c479-9456-48d9-8687-b059d838d9fe-kube-api-access-4p5qw\") pod \"redhat-operators-cn5f4\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:15 crc kubenswrapper[4886]: I0219 21:44:15.763497 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:16 crc kubenswrapper[4886]: I0219 21:44:16.235101 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cn5f4"] Feb 19 21:44:16 crc kubenswrapper[4886]: I0219 21:44:16.745511 4886 generic.go:334] "Generic (PLEG): container finished" podID="bb68c479-9456-48d9-8687-b059d838d9fe" containerID="b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf" exitCode=0 Feb 19 21:44:16 crc kubenswrapper[4886]: I0219 21:44:16.745796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerDied","Data":"b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf"} Feb 19 21:44:16 crc kubenswrapper[4886]: I0219 21:44:16.745853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerStarted","Data":"3fbec7c69c2b7fd1c93482a12d2577324346a6d24ada5fa1908f035803a8b307"} Feb 19 21:44:17 crc kubenswrapper[4886]: I0219 21:44:17.760339 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerStarted","Data":"06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0"} Feb 19 21:44:18 crc kubenswrapper[4886]: I0219 21:44:18.324498 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:44:18 crc kubenswrapper[4886]: I0219 21:44:18.324576 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:44:18 crc kubenswrapper[4886]: I0219 21:44:18.775043 4886 generic.go:334] "Generic (PLEG): container finished" podID="5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" containerID="8dc74072cb7a8b120e6e71f7f0cd82649597d03e718e2c9d1c93735bafbfb135" exitCode=0 Feb 19 21:44:18 crc kubenswrapper[4886]: I0219 21:44:18.775176 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" event={"ID":"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf","Type":"ContainerDied","Data":"8dc74072cb7a8b120e6e71f7f0cd82649597d03e718e2c9d1c93735bafbfb135"} Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.360244 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.548963 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-0\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549022 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-3\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549081 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-ssh-key-openstack-edpm-ipam\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549144 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-2\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549177 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8dfp\" (UniqueName: \"kubernetes.io/projected/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-kube-api-access-d8dfp\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549246 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-inventory\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549364 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-combined-ca-bundle\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549456 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-1\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: 
\"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549523 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-0\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-extra-config-0\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.549653 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-1\") pod \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\" (UID: \"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf\") " Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.561454 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.567871 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-kube-api-access-d8dfp" (OuterVolumeSpecName: "kube-api-access-d8dfp") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). 
InnerVolumeSpecName "kube-api-access-d8dfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.591184 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.600043 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.605584 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.606440 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-inventory" (OuterVolumeSpecName: "inventory") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.610544 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.616229 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.616354 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.621106 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.621499 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" (UID: "5aaeb0ac-2ca0-4416-8d49-3722e529f1cf"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653005 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653050 4886 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653065 4886 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653078 4886 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653090 4886 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653102 4886 
reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653113 4886 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653125 4886 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653138 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653149 4886 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.653160 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8dfp\" (UniqueName: \"kubernetes.io/projected/5aaeb0ac-2ca0-4416-8d49-3722e529f1cf-kube-api-access-d8dfp\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.843006 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" event={"ID":"5aaeb0ac-2ca0-4416-8d49-3722e529f1cf","Type":"ContainerDied","Data":"8e63cc46aecf639ab12acc5b50cbc94f4a6d0956b508259a2d7f6fa505c5190f"} Feb 19 
21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.843063 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4bznm" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.843063 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e63cc46aecf639ab12acc5b50cbc94f4a6d0956b508259a2d7f6fa505c5190f" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.924855 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf"] Feb 19 21:44:20 crc kubenswrapper[4886]: E0219 21:44:20.925464 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.925487 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.925784 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aaeb0ac-2ca0-4416-8d49-3722e529f1cf" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.926792 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.930538 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.930766 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.930997 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.931149 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.941903 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:44:20 crc kubenswrapper[4886]: I0219 21:44:20.947657 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf"] Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.062147 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzmp\" (UniqueName: \"kubernetes.io/projected/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-kube-api-access-zbzmp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.062484 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.062608 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.062697 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.062770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.062838 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: 
\"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.063000 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.165989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzmp\" (UniqueName: \"kubernetes.io/projected/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-kube-api-access-zbzmp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.166120 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.166178 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 
21:44:21.166250 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.166311 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.166344 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.166509 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.171794 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-inventory\") 
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.171924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.175367 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.176502 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.177079 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" 
Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.177257 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.205545 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzmp\" (UniqueName: \"kubernetes.io/projected/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-kube-api-access-zbzmp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.247919 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:44:21 crc kubenswrapper[4886]: W0219 21:44:21.670248 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f5b2a4b_7bdc_434a_a90d_a3edb13e0faf.slice/crio-14bd8eab245eb86ec11991c247e80ce395cf3c78ca495bc5430c459133930227 WatchSource:0}: Error finding container 14bd8eab245eb86ec11991c247e80ce395cf3c78ca495bc5430c459133930227: Status 404 returned error can't find the container with id 14bd8eab245eb86ec11991c247e80ce395cf3c78ca495bc5430c459133930227 Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.670409 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf"] Feb 19 21:44:21 crc kubenswrapper[4886]: I0219 21:44:21.853565 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" event={"ID":"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf","Type":"ContainerStarted","Data":"14bd8eab245eb86ec11991c247e80ce395cf3c78ca495bc5430c459133930227"} Feb 19 21:44:22 crc kubenswrapper[4886]: I0219 21:44:22.865783 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" event={"ID":"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf","Type":"ContainerStarted","Data":"5a252e5e1f8d21a741cf730cb409c84d53febe839e725990f226a06d320543a7"} Feb 19 21:44:22 crc kubenswrapper[4886]: I0219 21:44:22.868584 4886 generic.go:334] "Generic (PLEG): container finished" podID="bb68c479-9456-48d9-8687-b059d838d9fe" containerID="06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0" exitCode=0 Feb 19 21:44:22 crc kubenswrapper[4886]: I0219 21:44:22.868610 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" 
event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerDied","Data":"06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0"} Feb 19 21:44:22 crc kubenswrapper[4886]: I0219 21:44:22.892368 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" podStartSLOduration=2.39674516 podStartE2EDuration="2.892326961s" podCreationTimestamp="2026-02-19 21:44:20 +0000 UTC" firstStartedPulling="2026-02-19 21:44:21.672582611 +0000 UTC m=+2692.300425661" lastFinishedPulling="2026-02-19 21:44:22.168164392 +0000 UTC m=+2692.796007462" observedRunningTime="2026-02-19 21:44:22.884882657 +0000 UTC m=+2693.512725717" watchObservedRunningTime="2026-02-19 21:44:22.892326961 +0000 UTC m=+2693.520170011" Feb 19 21:44:23 crc kubenswrapper[4886]: I0219 21:44:23.884974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerStarted","Data":"837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383"} Feb 19 21:44:23 crc kubenswrapper[4886]: I0219 21:44:23.917171 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cn5f4" podStartSLOduration=2.435040991 podStartE2EDuration="8.917146596s" podCreationTimestamp="2026-02-19 21:44:15 +0000 UTC" firstStartedPulling="2026-02-19 21:44:16.750206427 +0000 UTC m=+2687.378049467" lastFinishedPulling="2026-02-19 21:44:23.232312022 +0000 UTC m=+2693.860155072" observedRunningTime="2026-02-19 21:44:23.910716646 +0000 UTC m=+2694.538559746" watchObservedRunningTime="2026-02-19 21:44:23.917146596 +0000 UTC m=+2694.544989686" Feb 19 21:44:25 crc kubenswrapper[4886]: I0219 21:44:25.763742 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:25 crc kubenswrapper[4886]: I0219 21:44:25.763836 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:26 crc kubenswrapper[4886]: I0219 21:44:26.826245 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cn5f4" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="registry-server" probeResult="failure" output=< Feb 19 21:44:26 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:44:26 crc kubenswrapper[4886]: > Feb 19 21:44:36 crc kubenswrapper[4886]: I0219 21:44:36.817785 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cn5f4" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="registry-server" probeResult="failure" output=< Feb 19 21:44:36 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:44:36 crc kubenswrapper[4886]: > Feb 19 21:44:45 crc kubenswrapper[4886]: I0219 21:44:45.821220 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:45 crc kubenswrapper[4886]: I0219 21:44:45.870002 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:46 crc kubenswrapper[4886]: I0219 21:44:46.632396 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cn5f4"] Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.123573 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cn5f4" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="registry-server" containerID="cri-o://837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383" gracePeriod=2 Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.794235 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.911760 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-catalog-content\") pod \"bb68c479-9456-48d9-8687-b059d838d9fe\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.911899 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-utilities\") pod \"bb68c479-9456-48d9-8687-b059d838d9fe\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.912004 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p5qw\" (UniqueName: \"kubernetes.io/projected/bb68c479-9456-48d9-8687-b059d838d9fe-kube-api-access-4p5qw\") pod \"bb68c479-9456-48d9-8687-b059d838d9fe\" (UID: \"bb68c479-9456-48d9-8687-b059d838d9fe\") " Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.913415 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-utilities" (OuterVolumeSpecName: "utilities") pod "bb68c479-9456-48d9-8687-b059d838d9fe" (UID: "bb68c479-9456-48d9-8687-b059d838d9fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:44:47 crc kubenswrapper[4886]: I0219 21:44:47.920377 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb68c479-9456-48d9-8687-b059d838d9fe-kube-api-access-4p5qw" (OuterVolumeSpecName: "kube-api-access-4p5qw") pod "bb68c479-9456-48d9-8687-b059d838d9fe" (UID: "bb68c479-9456-48d9-8687-b059d838d9fe"). InnerVolumeSpecName "kube-api-access-4p5qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.014729 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.014768 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p5qw\" (UniqueName: \"kubernetes.io/projected/bb68c479-9456-48d9-8687-b059d838d9fe-kube-api-access-4p5qw\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.045495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb68c479-9456-48d9-8687-b059d838d9fe" (UID: "bb68c479-9456-48d9-8687-b059d838d9fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.116586 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb68c479-9456-48d9-8687-b059d838d9fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.140930 4886 generic.go:334] "Generic (PLEG): container finished" podID="bb68c479-9456-48d9-8687-b059d838d9fe" containerID="837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383" exitCode=0 Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.141015 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cn5f4" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.141995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerDied","Data":"837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383"} Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.142189 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cn5f4" event={"ID":"bb68c479-9456-48d9-8687-b059d838d9fe","Type":"ContainerDied","Data":"3fbec7c69c2b7fd1c93482a12d2577324346a6d24ada5fa1908f035803a8b307"} Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.142420 4886 scope.go:117] "RemoveContainer" containerID="837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.183677 4886 scope.go:117] "RemoveContainer" containerID="06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.194019 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cn5f4"] Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.205751 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cn5f4"] Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.212030 4886 scope.go:117] "RemoveContainer" containerID="b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf" Feb 19 21:44:48 crc kubenswrapper[4886]: E0219 21:44:48.268180 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb68c479_9456_48d9_8687_b059d838d9fe.slice/crio-3fbec7c69c2b7fd1c93482a12d2577324346a6d24ada5fa1908f035803a8b307\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb68c479_9456_48d9_8687_b059d838d9fe.slice\": RecentStats: unable to find data in memory cache]" Feb 19 21:44:48 crc kubenswrapper[4886]: E0219 21:44:48.268294 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb68c479_9456_48d9_8687_b059d838d9fe.slice\": RecentStats: unable to find data in memory cache]" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.277370 4886 scope.go:117] "RemoveContainer" containerID="837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383" Feb 19 21:44:48 crc kubenswrapper[4886]: E0219 21:44:48.286525 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383\": container with ID starting with 837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383 not found: ID does not exist" containerID="837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.286576 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383"} err="failed to get container status \"837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383\": rpc error: code = NotFound desc = could not find container \"837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383\": container with ID starting with 837a7c09b0c9387dd2fd18ffd9c57abee2024b6a955da43a00fa76b7b849d383 not found: ID does not exist" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.286609 4886 scope.go:117] "RemoveContainer" containerID="06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0" Feb 19 21:44:48 crc kubenswrapper[4886]: E0219 21:44:48.289872 4886 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0\": container with ID starting with 06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0 not found: ID does not exist" containerID="06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.289899 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0"} err="failed to get container status \"06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0\": rpc error: code = NotFound desc = could not find container \"06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0\": container with ID starting with 06fcc0738baf664af917449fc2ac369421bcfd10122cdda20107f07bd74d93e0 not found: ID does not exist" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.289922 4886 scope.go:117] "RemoveContainer" containerID="b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf" Feb 19 21:44:48 crc kubenswrapper[4886]: E0219 21:44:48.290142 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf\": container with ID starting with b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf not found: ID does not exist" containerID="b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.290181 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf"} err="failed to get container status \"b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf\": rpc error: code = NotFound desc = could 
not find container \"b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf\": container with ID starting with b0d5352c31e816833b6338c93c1e32d859031d3903547bd68bf4c80ea2ca29cf not found: ID does not exist" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.324329 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.324535 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.324661 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.325312 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a16c1e2937e3aad302477d79d5b66b3962f3379994a86f24c23094445ae2c37"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.325444 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://7a16c1e2937e3aad302477d79d5b66b3962f3379994a86f24c23094445ae2c37" gracePeriod=600 
Feb 19 21:44:48 crc kubenswrapper[4886]: I0219 21:44:48.615738 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" path="/var/lib/kubelet/pods/bb68c479-9456-48d9-8687-b059d838d9fe/volumes" Feb 19 21:44:49 crc kubenswrapper[4886]: I0219 21:44:49.162711 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="7a16c1e2937e3aad302477d79d5b66b3962f3379994a86f24c23094445ae2c37" exitCode=0 Feb 19 21:44:49 crc kubenswrapper[4886]: I0219 21:44:49.162761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"7a16c1e2937e3aad302477d79d5b66b3962f3379994a86f24c23094445ae2c37"} Feb 19 21:44:49 crc kubenswrapper[4886]: I0219 21:44:49.162791 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"} Feb 19 21:44:49 crc kubenswrapper[4886]: I0219 21:44:49.162813 4886 scope.go:117] "RemoveContainer" containerID="25834097c29a5bf974e11a15d7259e7a0ab8d2390e378221557b5e649d28e795" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.156765 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5"] Feb 19 21:45:00 crc kubenswrapper[4886]: E0219 21:45:00.157941 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="extract-utilities" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.157961 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="extract-utilities" Feb 19 21:45:00 crc kubenswrapper[4886]: E0219 
21:45:00.157973 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="extract-content" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.157980 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="extract-content" Feb 19 21:45:00 crc kubenswrapper[4886]: E0219 21:45:00.158027 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="registry-server" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.158035 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="registry-server" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.158303 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb68c479-9456-48d9-8687-b059d838d9fe" containerName="registry-server" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.159237 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.161123 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.161510 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.171420 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5"] Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.340706 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0469dd08-503b-4d55-9834-6dbfb58cae71-config-volume\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.341184 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0469dd08-503b-4d55-9834-6dbfb58cae71-secret-volume\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.341368 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkdt\" (UniqueName: \"kubernetes.io/projected/0469dd08-503b-4d55-9834-6dbfb58cae71-kube-api-access-swkdt\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.443678 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swkdt\" (UniqueName: \"kubernetes.io/projected/0469dd08-503b-4d55-9834-6dbfb58cae71-kube-api-access-swkdt\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.444244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0469dd08-503b-4d55-9834-6dbfb58cae71-config-volume\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.444380 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0469dd08-503b-4d55-9834-6dbfb58cae71-secret-volume\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.445215 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0469dd08-503b-4d55-9834-6dbfb58cae71-config-volume\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.451488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0469dd08-503b-4d55-9834-6dbfb58cae71-secret-volume\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.465904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swkdt\" (UniqueName: \"kubernetes.io/projected/0469dd08-503b-4d55-9834-6dbfb58cae71-kube-api-access-swkdt\") pod \"collect-profiles-29525625-fdxr5\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.491093 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:00 crc kubenswrapper[4886]: I0219 21:45:00.939866 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5"] Feb 19 21:45:01 crc kubenswrapper[4886]: I0219 21:45:01.306603 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" event={"ID":"0469dd08-503b-4d55-9834-6dbfb58cae71","Type":"ContainerStarted","Data":"a5ccdc8cbaa16f2879a523a250719acd78c9c7f5e1109dcf16bdec8ab9a5929f"} Feb 19 21:45:01 crc kubenswrapper[4886]: I0219 21:45:01.306661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" event={"ID":"0469dd08-503b-4d55-9834-6dbfb58cae71","Type":"ContainerStarted","Data":"4eaadfbfe20803c23d554777726c65a5615d69c9bccf31ce7e4e67ef76916af4"} Feb 19 21:45:01 crc kubenswrapper[4886]: I0219 21:45:01.328385 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" 
podStartSLOduration=1.328368457 podStartE2EDuration="1.328368457s" podCreationTimestamp="2026-02-19 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:45:01.320448821 +0000 UTC m=+2731.948291881" watchObservedRunningTime="2026-02-19 21:45:01.328368457 +0000 UTC m=+2731.956211507" Feb 19 21:45:02 crc kubenswrapper[4886]: I0219 21:45:02.319052 4886 generic.go:334] "Generic (PLEG): container finished" podID="0469dd08-503b-4d55-9834-6dbfb58cae71" containerID="a5ccdc8cbaa16f2879a523a250719acd78c9c7f5e1109dcf16bdec8ab9a5929f" exitCode=0 Feb 19 21:45:02 crc kubenswrapper[4886]: I0219 21:45:02.319114 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" event={"ID":"0469dd08-503b-4d55-9834-6dbfb58cae71","Type":"ContainerDied","Data":"a5ccdc8cbaa16f2879a523a250719acd78c9c7f5e1109dcf16bdec8ab9a5929f"} Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.807664 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.932398 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0469dd08-503b-4d55-9834-6dbfb58cae71-config-volume\") pod \"0469dd08-503b-4d55-9834-6dbfb58cae71\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.932509 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0469dd08-503b-4d55-9834-6dbfb58cae71-secret-volume\") pod \"0469dd08-503b-4d55-9834-6dbfb58cae71\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.932883 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swkdt\" (UniqueName: \"kubernetes.io/projected/0469dd08-503b-4d55-9834-6dbfb58cae71-kube-api-access-swkdt\") pod \"0469dd08-503b-4d55-9834-6dbfb58cae71\" (UID: \"0469dd08-503b-4d55-9834-6dbfb58cae71\") " Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.933472 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0469dd08-503b-4d55-9834-6dbfb58cae71-config-volume" (OuterVolumeSpecName: "config-volume") pod "0469dd08-503b-4d55-9834-6dbfb58cae71" (UID: "0469dd08-503b-4d55-9834-6dbfb58cae71"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.934352 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0469dd08-503b-4d55-9834-6dbfb58cae71-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.938779 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0469dd08-503b-4d55-9834-6dbfb58cae71-kube-api-access-swkdt" (OuterVolumeSpecName: "kube-api-access-swkdt") pod "0469dd08-503b-4d55-9834-6dbfb58cae71" (UID: "0469dd08-503b-4d55-9834-6dbfb58cae71"). InnerVolumeSpecName "kube-api-access-swkdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:45:03 crc kubenswrapper[4886]: I0219 21:45:03.940426 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0469dd08-503b-4d55-9834-6dbfb58cae71-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0469dd08-503b-4d55-9834-6dbfb58cae71" (UID: "0469dd08-503b-4d55-9834-6dbfb58cae71"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.036734 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swkdt\" (UniqueName: \"kubernetes.io/projected/0469dd08-503b-4d55-9834-6dbfb58cae71-kube-api-access-swkdt\") on node \"crc\" DevicePath \"\"" Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.036770 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0469dd08-503b-4d55-9834-6dbfb58cae71-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.383219 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" event={"ID":"0469dd08-503b-4d55-9834-6dbfb58cae71","Type":"ContainerDied","Data":"4eaadfbfe20803c23d554777726c65a5615d69c9bccf31ce7e4e67ef76916af4"} Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.383378 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eaadfbfe20803c23d554777726c65a5615d69c9bccf31ce7e4e67ef76916af4" Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.383382 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525625-fdxr5" Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.407178 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"] Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.418142 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-5pbww"] Feb 19 21:45:04 crc kubenswrapper[4886]: I0219 21:45:04.619454 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202fcc8c-4e14-4336-aaef-22f33ff09ece" path="/var/lib/kubelet/pods/202fcc8c-4e14-4336-aaef-22f33ff09ece/volumes" Feb 19 21:45:55 crc kubenswrapper[4886]: I0219 21:45:55.016953 4886 scope.go:117] "RemoveContainer" containerID="f2f0373543ca7cabe0012cb73b9e89e2ae85e1ff70a900656a93a9716b233cae" Feb 19 21:46:42 crc kubenswrapper[4886]: I0219 21:46:42.613363 4886 generic.go:334] "Generic (PLEG): container finished" podID="8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" containerID="5a252e5e1f8d21a741cf730cb409c84d53febe839e725990f226a06d320543a7" exitCode=0 Feb 19 21:46:42 crc kubenswrapper[4886]: I0219 21:46:42.631688 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" event={"ID":"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf","Type":"ContainerDied","Data":"5a252e5e1f8d21a741cf730cb409c84d53febe839e725990f226a06d320543a7"} Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.233660 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.383502 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-0\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.383637 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-1\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.383738 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ssh-key-openstack-edpm-ipam\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.383806 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-telemetry-combined-ca-bundle\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.383970 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-2\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: 
\"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.384070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbzmp\" (UniqueName: \"kubernetes.io/projected/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-kube-api-access-zbzmp\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.384144 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-inventory\") pod \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\" (UID: \"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf\") " Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.396605 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-kube-api-access-zbzmp" (OuterVolumeSpecName: "kube-api-access-zbzmp") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "kube-api-access-zbzmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.402731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.420398 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.422335 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.435095 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-inventory" (OuterVolumeSpecName: "inventory") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.451621 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.463010 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" (UID: "8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.487702 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbzmp\" (UniqueName: \"kubernetes.io/projected/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-kube-api-access-zbzmp\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.487747 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.487767 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.487789 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.487814 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc 
kubenswrapper[4886]: I0219 21:46:44.487839 4886 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.487863 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.640778 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" event={"ID":"8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf","Type":"ContainerDied","Data":"14bd8eab245eb86ec11991c247e80ce395cf3c78ca495bc5430c459133930227"} Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.640821 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bd8eab245eb86ec11991c247e80ce395cf3c78ca495bc5430c459133930227" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.640837 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gqtgf" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.875901 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz"] Feb 19 21:46:44 crc kubenswrapper[4886]: E0219 21:46:44.876892 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.876923 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 21:46:44 crc kubenswrapper[4886]: E0219 21:46:44.876986 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0469dd08-503b-4d55-9834-6dbfb58cae71" containerName="collect-profiles" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.877002 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0469dd08-503b-4d55-9834-6dbfb58cae71" containerName="collect-profiles" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.877435 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5b2a4b-7bdc-434a-a90d-a3edb13e0faf" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.877474 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0469dd08-503b-4d55-9834-6dbfb58cae71" containerName="collect-profiles" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.878747 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.881365 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.882857 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.882911 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.882929 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.883143 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:46:44 crc kubenswrapper[4886]: I0219 21:46:44.893702 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz"] Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.009274 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrl5\" (UniqueName: \"kubernetes.io/projected/da891c3f-ac6c-41b3-8c91-2ca7750ec989-kube-api-access-ndrl5\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.009590 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.009722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.009833 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.009948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.010586 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.010692 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112367 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndrl5\" (UniqueName: \"kubernetes.io/projected/da891c3f-ac6c-41b3-8c91-2ca7750ec989-kube-api-access-ndrl5\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112591 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112632 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.112779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.117868 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.117866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.118856 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.119029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.119030 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.126755 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.129550 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndrl5\" (UniqueName: \"kubernetes.io/projected/da891c3f-ac6c-41b3-8c91-2ca7750ec989-kube-api-access-ndrl5\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.204215 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:46:45 crc kubenswrapper[4886]: I0219 21:46:45.754212 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz"] Feb 19 21:46:45 crc kubenswrapper[4886]: W0219 21:46:45.759248 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda891c3f_ac6c_41b3_8c91_2ca7750ec989.slice/crio-b11a24373d29455e85b31cb4d2798f68b7ca0b080593a26b3b440846c2c3c97f WatchSource:0}: Error finding container b11a24373d29455e85b31cb4d2798f68b7ca0b080593a26b3b440846c2c3c97f: Status 404 returned error can't find the container with id b11a24373d29455e85b31cb4d2798f68b7ca0b080593a26b3b440846c2c3c97f Feb 19 21:46:46 crc kubenswrapper[4886]: I0219 21:46:46.664125 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" event={"ID":"da891c3f-ac6c-41b3-8c91-2ca7750ec989","Type":"ContainerStarted","Data":"673e3e672d29f758e06a8bee084a7b13988f8bd4190beefcedcfccd41a6cdd1f"} Feb 19 21:46:46 crc kubenswrapper[4886]: I0219 21:46:46.664527 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" event={"ID":"da891c3f-ac6c-41b3-8c91-2ca7750ec989","Type":"ContainerStarted","Data":"b11a24373d29455e85b31cb4d2798f68b7ca0b080593a26b3b440846c2c3c97f"} Feb 19 21:46:46 crc kubenswrapper[4886]: I0219 21:46:46.698996 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" podStartSLOduration=2.2851354280000002 podStartE2EDuration="2.698977723s" podCreationTimestamp="2026-02-19 21:46:44 +0000 UTC" firstStartedPulling="2026-02-19 21:46:45.761649097 +0000 UTC m=+2836.389492147" lastFinishedPulling="2026-02-19 
21:46:46.175491392 +0000 UTC m=+2836.803334442" observedRunningTime="2026-02-19 21:46:46.688950776 +0000 UTC m=+2837.316793826" watchObservedRunningTime="2026-02-19 21:46:46.698977723 +0000 UTC m=+2837.326820773" Feb 19 21:46:48 crc kubenswrapper[4886]: I0219 21:46:48.325254 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:46:48 crc kubenswrapper[4886]: I0219 21:46:48.325856 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:47:18 crc kubenswrapper[4886]: I0219 21:47:18.325137 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:47:18 crc kubenswrapper[4886]: I0219 21:47:18.325646 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:47:48 crc kubenswrapper[4886]: I0219 21:47:48.324561 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:47:48 crc kubenswrapper[4886]: I0219 21:47:48.325113 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:47:48 crc kubenswrapper[4886]: I0219 21:47:48.325174 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:47:48 crc kubenswrapper[4886]: I0219 21:47:48.326023 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:47:48 crc kubenswrapper[4886]: I0219 21:47:48.326077 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" gracePeriod=600 Feb 19 21:47:48 crc kubenswrapper[4886]: E0219 21:47:48.464132 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:47:49 
crc kubenswrapper[4886]: I0219 21:47:49.413628 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" exitCode=0 Feb 19 21:47:49 crc kubenswrapper[4886]: I0219 21:47:49.413703 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"} Feb 19 21:47:49 crc kubenswrapper[4886]: I0219 21:47:49.414133 4886 scope.go:117] "RemoveContainer" containerID="7a16c1e2937e3aad302477d79d5b66b3962f3379994a86f24c23094445ae2c37" Feb 19 21:47:49 crc kubenswrapper[4886]: I0219 21:47:49.414893 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:47:49 crc kubenswrapper[4886]: E0219 21:47:49.415233 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:48:04 crc kubenswrapper[4886]: I0219 21:48:04.602938 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:48:04 crc kubenswrapper[4886]: E0219 21:48:04.604550 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:48:18 crc kubenswrapper[4886]: I0219 21:48:18.601110 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:48:18 crc kubenswrapper[4886]: E0219 21:48:18.603606 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:48:32 crc kubenswrapper[4886]: I0219 21:48:32.601728 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:48:32 crc kubenswrapper[4886]: E0219 21:48:32.602566 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:48:43 crc kubenswrapper[4886]: I0219 21:48:43.084945 4886 generic.go:334] "Generic (PLEG): container finished" podID="da891c3f-ac6c-41b3-8c91-2ca7750ec989" containerID="673e3e672d29f758e06a8bee084a7b13988f8bd4190beefcedcfccd41a6cdd1f" exitCode=0 Feb 19 21:48:43 crc kubenswrapper[4886]: I0219 21:48:43.085042 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" 
event={"ID":"da891c3f-ac6c-41b3-8c91-2ca7750ec989","Type":"ContainerDied","Data":"673e3e672d29f758e06a8bee084a7b13988f8bd4190beefcedcfccd41a6cdd1f"} Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.583984 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.677357 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-inventory\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.677457 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ssh-key-openstack-edpm-ipam\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.677500 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-0\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.677802 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-2\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.677922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-telemetry-power-monitoring-combined-ca-bundle\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.677968 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-1\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.678034 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndrl5\" (UniqueName: \"kubernetes.io/projected/da891c3f-ac6c-41b3-8c91-2ca7750ec989-kube-api-access-ndrl5\") pod \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\" (UID: \"da891c3f-ac6c-41b3-8c91-2ca7750ec989\") " Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.683068 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.683488 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da891c3f-ac6c-41b3-8c91-2ca7750ec989-kube-api-access-ndrl5" (OuterVolumeSpecName: "kube-api-access-ndrl5") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "kube-api-access-ndrl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.715897 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.716689 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.717315 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.719205 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.721475 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-inventory" (OuterVolumeSpecName: "inventory") pod "da891c3f-ac6c-41b3-8c91-2ca7750ec989" (UID: "da891c3f-ac6c-41b3-8c91-2ca7750ec989"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781617 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781647 4886 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781657 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781670 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndrl5\" (UniqueName: \"kubernetes.io/projected/da891c3f-ac6c-41b3-8c91-2ca7750ec989-kube-api-access-ndrl5\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781680 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781688 4886 
reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:44 crc kubenswrapper[4886]: I0219 21:48:44.781696 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/da891c3f-ac6c-41b3-8c91-2ca7750ec989-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.112355 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" event={"ID":"da891c3f-ac6c-41b3-8c91-2ca7750ec989","Type":"ContainerDied","Data":"b11a24373d29455e85b31cb4d2798f68b7ca0b080593a26b3b440846c2c3c97f"} Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.112434 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11a24373d29455e85b31cb4d2798f68b7ca0b080593a26b3b440846c2c3c97f" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.112397 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2pvwz" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.218288 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d"] Feb 19 21:48:45 crc kubenswrapper[4886]: E0219 21:48:45.218883 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da891c3f-ac6c-41b3-8c91-2ca7750ec989" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.218909 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da891c3f-ac6c-41b3-8c91-2ca7750ec989" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.219288 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da891c3f-ac6c-41b3-8c91-2ca7750ec989" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.220324 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.222645 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.223379 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vq4ls" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.223665 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.223859 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.227930 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.238883 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d"] Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.295091 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.295146 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6shj\" (UniqueName: \"kubernetes.io/projected/d5e539de-564f-4ae3-8102-cefe1520a102-kube-api-access-f6shj\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: 
\"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.295186 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.295493 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.295763 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.398044 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.398235 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6shj\" (UniqueName: \"kubernetes.io/projected/d5e539de-564f-4ae3-8102-cefe1520a102-kube-api-access-f6shj\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.398287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.398321 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.398431 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.402240 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-inventory\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.402482 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.402571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.410872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 21:48:45.417885 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6shj\" (UniqueName: \"kubernetes.io/projected/d5e539de-564f-4ae3-8102-cefe1520a102-kube-api-access-f6shj\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hfz8d\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" Feb 19 21:48:45 crc kubenswrapper[4886]: I0219 
21:48:45.540092 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d"
Feb 19 21:48:46 crc kubenswrapper[4886]: I0219 21:48:46.099617 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d"]
Feb 19 21:48:46 crc kubenswrapper[4886]: I0219 21:48:46.103823 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 21:48:46 crc kubenswrapper[4886]: I0219 21:48:46.126500 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" event={"ID":"d5e539de-564f-4ae3-8102-cefe1520a102","Type":"ContainerStarted","Data":"c6340f2cd8502921315c7d403421dab181eefd650108b30589563403aba870dc"}
Feb 19 21:48:47 crc kubenswrapper[4886]: I0219 21:48:47.149459 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" event={"ID":"d5e539de-564f-4ae3-8102-cefe1520a102","Type":"ContainerStarted","Data":"b034a908d63f1e57be0f5ba13412643784c90e9de0e70fdadb5d95288a272da5"}
Feb 19 21:48:47 crc kubenswrapper[4886]: I0219 21:48:47.200252 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" podStartSLOduration=1.797939853 podStartE2EDuration="2.200227514s" podCreationTimestamp="2026-02-19 21:48:45 +0000 UTC" firstStartedPulling="2026-02-19 21:48:46.103622294 +0000 UTC m=+2956.731465344" lastFinishedPulling="2026-02-19 21:48:46.505909955 +0000 UTC m=+2957.133753005" observedRunningTime="2026-02-19 21:48:47.174503409 +0000 UTC m=+2957.802346479" watchObservedRunningTime="2026-02-19 21:48:47.200227514 +0000 UTC m=+2957.828070564"
Feb 19 21:48:47 crc kubenswrapper[4886]: I0219 21:48:47.601766 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:48:47 crc kubenswrapper[4886]: E0219 21:48:47.602081 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:48:59 crc kubenswrapper[4886]: I0219 21:48:59.601600 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:48:59 crc kubenswrapper[4886]: E0219 21:48:59.602700 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:49:02 crc kubenswrapper[4886]: I0219 21:49:02.315314 4886 generic.go:334] "Generic (PLEG): container finished" podID="d5e539de-564f-4ae3-8102-cefe1520a102" containerID="b034a908d63f1e57be0f5ba13412643784c90e9de0e70fdadb5d95288a272da5" exitCode=0
Feb 19 21:49:02 crc kubenswrapper[4886]: I0219 21:49:02.315378 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" event={"ID":"d5e539de-564f-4ae3-8102-cefe1520a102","Type":"ContainerDied","Data":"b034a908d63f1e57be0f5ba13412643784c90e9de0e70fdadb5d95288a272da5"}
Feb 19 21:49:03 crc kubenswrapper[4886]: I0219 21:49:03.881651 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d"
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.045045 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-inventory\") pod \"d5e539de-564f-4ae3-8102-cefe1520a102\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") "
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.045127 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-1\") pod \"d5e539de-564f-4ae3-8102-cefe1520a102\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") "
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.045211 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-ssh-key-openstack-edpm-ipam\") pod \"d5e539de-564f-4ae3-8102-cefe1520a102\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") "
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.045444 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6shj\" (UniqueName: \"kubernetes.io/projected/d5e539de-564f-4ae3-8102-cefe1520a102-kube-api-access-f6shj\") pod \"d5e539de-564f-4ae3-8102-cefe1520a102\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") "
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.045595 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-0\") pod \"d5e539de-564f-4ae3-8102-cefe1520a102\" (UID: \"d5e539de-564f-4ae3-8102-cefe1520a102\") "
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.064501 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e539de-564f-4ae3-8102-cefe1520a102-kube-api-access-f6shj" (OuterVolumeSpecName: "kube-api-access-f6shj") pod "d5e539de-564f-4ae3-8102-cefe1520a102" (UID: "d5e539de-564f-4ae3-8102-cefe1520a102"). InnerVolumeSpecName "kube-api-access-f6shj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.105825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d5e539de-564f-4ae3-8102-cefe1520a102" (UID: "d5e539de-564f-4ae3-8102-cefe1520a102"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.111244 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "d5e539de-564f-4ae3-8102-cefe1520a102" (UID: "d5e539de-564f-4ae3-8102-cefe1520a102"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.115564 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-inventory" (OuterVolumeSpecName: "inventory") pod "d5e539de-564f-4ae3-8102-cefe1520a102" (UID: "d5e539de-564f-4ae3-8102-cefe1520a102"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.123728 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "d5e539de-564f-4ae3-8102-cefe1520a102" (UID: "d5e539de-564f-4ae3-8102-cefe1520a102"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.148471 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6shj\" (UniqueName: \"kubernetes.io/projected/d5e539de-564f-4ae3-8102-cefe1520a102-kube-api-access-f6shj\") on node \"crc\" DevicePath \"\""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.148502 4886 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.148514 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.148527 4886 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.148538 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5e539de-564f-4ae3-8102-cefe1520a102-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.337691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d" event={"ID":"d5e539de-564f-4ae3-8102-cefe1520a102","Type":"ContainerDied","Data":"c6340f2cd8502921315c7d403421dab181eefd650108b30589563403aba870dc"}
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.337982 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6340f2cd8502921315c7d403421dab181eefd650108b30589563403aba870dc"
Feb 19 21:49:04 crc kubenswrapper[4886]: I0219 21:49:04.337820 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hfz8d"
Feb 19 21:49:14 crc kubenswrapper[4886]: I0219 21:49:14.601710 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:49:14 crc kubenswrapper[4886]: E0219 21:49:14.602786 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:49:25 crc kubenswrapper[4886]: I0219 21:49:25.602077 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:49:25 crc kubenswrapper[4886]: E0219 21:49:25.605435 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:49:40 crc kubenswrapper[4886]: I0219 21:49:40.605223 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:49:40 crc kubenswrapper[4886]: E0219 21:49:40.606152 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:49:51 crc kubenswrapper[4886]: I0219 21:49:51.601549 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:49:51 crc kubenswrapper[4886]: E0219 21:49:51.602866 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:50:04 crc kubenswrapper[4886]: I0219 21:50:04.601252 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:50:04 crc kubenswrapper[4886]: E0219 21:50:04.602285 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:50:18 crc kubenswrapper[4886]: I0219 21:50:18.603855 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:50:18 crc kubenswrapper[4886]: E0219 21:50:18.605039 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:50:32 crc kubenswrapper[4886]: I0219 21:50:32.600977 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:50:32 crc kubenswrapper[4886]: E0219 21:50:32.601832 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:50:45 crc kubenswrapper[4886]: I0219 21:50:45.604473 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:50:45 crc kubenswrapper[4886]: E0219 21:50:45.605626 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:50:58 crc kubenswrapper[4886]: I0219 21:50:58.601699 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:50:58 crc kubenswrapper[4886]: E0219 21:50:58.602593 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:51:11 crc kubenswrapper[4886]: I0219 21:51:11.601640 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:51:11 crc kubenswrapper[4886]: E0219 21:51:11.604668 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:51:23 crc kubenswrapper[4886]: I0219 21:51:23.601538 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:51:23 crc kubenswrapper[4886]: E0219 21:51:23.602754 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.038736 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nzzqm"]
Feb 19 21:51:31 crc kubenswrapper[4886]: E0219 21:51:31.039742 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e539de-564f-4ae3-8102-cefe1520a102" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.039756 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e539de-564f-4ae3-8102-cefe1520a102" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.040004 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e539de-564f-4ae3-8102-cefe1520a102" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.041699 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.056699 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzzqm"]
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.099578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-utilities\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.099674 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-catalog-content\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.099721 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4msn\" (UniqueName: \"kubernetes.io/projected/2fa842a8-de52-42ff-92aa-647120396456-kube-api-access-b4msn\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.201713 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4msn\" (UniqueName: \"kubernetes.io/projected/2fa842a8-de52-42ff-92aa-647120396456-kube-api-access-b4msn\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.202002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-utilities\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.202088 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-catalog-content\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.202505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-utilities\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.203312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-catalog-content\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.232788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4msn\" (UniqueName: \"kubernetes.io/projected/2fa842a8-de52-42ff-92aa-647120396456-kube-api-access-b4msn\") pod \"community-operators-nzzqm\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.377342 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzzqm"
Feb 19 21:51:31 crc kubenswrapper[4886]: I0219 21:51:31.887528 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzzqm"]
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.281135 4886 generic.go:334] "Generic (PLEG): container finished" podID="2fa842a8-de52-42ff-92aa-647120396456" containerID="8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4" exitCode=0
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.281247 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerDied","Data":"8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4"}
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.281477 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerStarted","Data":"dcf7a3776097a9601978fb2b39f1ebcf89e89a7eb56adc8924b030443c91488f"}
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.438108 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vdk6r"]
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.440772 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.491890 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdk6r"]
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.532443 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-catalog-content\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.532659 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9w9\" (UniqueName: \"kubernetes.io/projected/8405924e-452f-490b-be92-a6e7cc928f4d-kube-api-access-2z9w9\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.532754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-utilities\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.636420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-utilities\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.637031 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-catalog-content\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.637042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-utilities\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.637428 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9w9\" (UniqueName: \"kubernetes.io/projected/8405924e-452f-490b-be92-a6e7cc928f4d-kube-api-access-2z9w9\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.637452 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-catalog-content\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.689936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9w9\" (UniqueName: \"kubernetes.io/projected/8405924e-452f-490b-be92-a6e7cc928f4d-kube-api-access-2z9w9\") pod \"redhat-marketplace-vdk6r\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:32 crc kubenswrapper[4886]: I0219 21:51:32.777799 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdk6r"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.239558 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdk6r"]
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.294852 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerStarted","Data":"ca78135f06ae3729af80381ae375d2b9752f7e37e28bd156fc15295ea813d91f"}
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.431817 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2tlbj"]
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.434234 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.455763 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tlbj"]
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.564359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ctk8\" (UniqueName: \"kubernetes.io/projected/10c2539a-86fc-42f4-9a7a-5ce4bc595589-kube-api-access-6ctk8\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.564711 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-utilities\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.564867 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-catalog-content\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.667178 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-catalog-content\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.667629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ctk8\" (UniqueName: \"kubernetes.io/projected/10c2539a-86fc-42f4-9a7a-5ce4bc595589-kube-api-access-6ctk8\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.667732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-utilities\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.668137 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-catalog-content\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.668358 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-utilities\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.689406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ctk8\" (UniqueName: \"kubernetes.io/projected/10c2539a-86fc-42f4-9a7a-5ce4bc595589-kube-api-access-6ctk8\") pod \"certified-operators-2tlbj\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:33 crc kubenswrapper[4886]: I0219 21:51:33.765240 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tlbj"
Feb 19 21:51:34 crc kubenswrapper[4886]: I0219 21:51:34.305903 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerStarted","Data":"9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839"}
Feb 19 21:51:34 crc kubenswrapper[4886]: I0219 21:51:34.307540 4886 generic.go:334] "Generic (PLEG): container finished" podID="8405924e-452f-490b-be92-a6e7cc928f4d" containerID="707372db124573d46348062bd1085394c494606b40161b136c0a7566a2782868" exitCode=0
Feb 19 21:51:34 crc kubenswrapper[4886]: I0219 21:51:34.307582 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerDied","Data":"707372db124573d46348062bd1085394c494606b40161b136c0a7566a2782868"}
Feb 19 21:51:34 crc kubenswrapper[4886]: I0219 21:51:34.318779 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tlbj"]
Feb 19 21:51:35 crc kubenswrapper[4886]: I0219 21:51:35.337280 4886 generic.go:334] "Generic (PLEG): container finished" podID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerID="ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197" exitCode=0
Feb 19 21:51:35 crc kubenswrapper[4886]: I0219 21:51:35.337592 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerDied","Data":"ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197"}
Feb 19 21:51:35 crc kubenswrapper[4886]: I0219 21:51:35.338119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerStarted","Data":"f37b22209f977a2ef7a7d4a9dfdc3f71e7d155d31dcd46508a895d49e3c376ad"}
Feb 19 21:51:35 crc kubenswrapper[4886]: I0219 21:51:35.350985 4886 generic.go:334] "Generic (PLEG): container finished" podID="2fa842a8-de52-42ff-92aa-647120396456" containerID="9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839" exitCode=0
Feb 19 21:51:35 crc kubenswrapper[4886]: I0219 21:51:35.351021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerDied","Data":"9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839"}
Feb 19 21:51:36 crc kubenswrapper[4886]: I0219 21:51:36.364600 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerStarted","Data":"5c0ad614638b6e395761c9cc73622c5d165da768befcc2a77233d21d89486bf3"}
Feb 19 21:51:36 crc kubenswrapper[4886]: I0219 21:51:36.368929 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerStarted","Data":"31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae"}
Feb 19 21:51:36 crc kubenswrapper[4886]: I0219 21:51:36.415628 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nzzqm" podStartSLOduration=1.67901467 podStartE2EDuration="5.415608176s" podCreationTimestamp="2026-02-19 21:51:31 +0000 UTC" firstStartedPulling="2026-02-19 21:51:32.28355416 +0000 UTC m=+3122.911397240" lastFinishedPulling="2026-02-19 21:51:36.020147656 +0000 UTC m=+3126.647990746" observedRunningTime="2026-02-19 21:51:36.40885969 +0000 UTC m=+3127.036702740" watchObservedRunningTime="2026-02-19 21:51:36.415608176 +0000 UTC m=+3127.043451226"
Feb 19 21:51:37 crc kubenswrapper[4886]: I0219 21:51:37.387613 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerStarted","Data":"c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543"}
Feb 19 21:51:37 crc kubenswrapper[4886]: I0219 21:51:37.393893 4886 generic.go:334] "Generic (PLEG): container finished" podID="8405924e-452f-490b-be92-a6e7cc928f4d" containerID="5c0ad614638b6e395761c9cc73622c5d165da768befcc2a77233d21d89486bf3" exitCode=0
Feb 19 21:51:37 crc kubenswrapper[4886]: I0219 21:51:37.396206 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerDied","Data":"5c0ad614638b6e395761c9cc73622c5d165da768befcc2a77233d21d89486bf3"}
Feb 19 21:51:37 crc kubenswrapper[4886]: I0219 21:51:37.602273 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed"
Feb 19 21:51:37 crc kubenswrapper[4886]: E0219 21:51:37.602737 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 21:51:38 crc kubenswrapper[4886]: I0219 21:51:38.410396 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerStarted","Data":"d0e21034dd8790ed8fd800ed9d3f4b4c60c3d5847aef931fbfa6ef91396c7ca3"}
Feb 19 21:51:38 crc kubenswrapper[4886]: I0219 21:51:38.431529 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vdk6r" podStartSLOduration=2.7494238380000002 podStartE2EDuration="6.431511618s" podCreationTimestamp="2026-02-19 21:51:32 +0000 UTC" firstStartedPulling="2026-02-19 21:51:34.310562235 +0000 UTC m=+3124.938405285" lastFinishedPulling="2026-02-19 21:51:37.992650005 +0000 UTC m=+3128.620493065" observedRunningTime="2026-02-19 21:51:38.426566486 +0000 UTC m=+3129.054409536" watchObservedRunningTime="2026-02-19 21:51:38.431511618 +0000 UTC m=+3129.059354668"
Feb 19 21:51:39 crc kubenswrapper[4886]: I0219 21:51:39.427492 4886 generic.go:334] "Generic (PLEG): container finished" podID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerID="c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543" exitCode=0
Feb 19 21:51:39 crc kubenswrapper[4886]: I0219 21:51:39.429045 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerDied","Data":"c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543"}
Feb 19 21:51:40 crc kubenswrapper[4886]: I0219 21:51:40.441299 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerStarted","Data":"e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b"} Feb 19 21:51:40 crc kubenswrapper[4886]: I0219 21:51:40.461768 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2tlbj" podStartSLOduration=2.9592636629999998 podStartE2EDuration="7.461742143s" podCreationTimestamp="2026-02-19 21:51:33 +0000 UTC" firstStartedPulling="2026-02-19 21:51:35.341377191 +0000 UTC m=+3125.969220241" lastFinishedPulling="2026-02-19 21:51:39.843855671 +0000 UTC m=+3130.471698721" observedRunningTime="2026-02-19 21:51:40.456885163 +0000 UTC m=+3131.084728223" watchObservedRunningTime="2026-02-19 21:51:40.461742143 +0000 UTC m=+3131.089585193" Feb 19 21:51:41 crc kubenswrapper[4886]: I0219 21:51:41.378177 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzzqm" Feb 19 21:51:41 crc kubenswrapper[4886]: I0219 21:51:41.378346 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzzqm" Feb 19 21:51:42 crc kubenswrapper[4886]: I0219 21:51:42.443078 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nzzqm" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="registry-server" probeResult="failure" output=< Feb 19 21:51:42 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:51:42 crc kubenswrapper[4886]: > Feb 19 21:51:42 crc kubenswrapper[4886]: I0219 21:51:42.778403 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vdk6r" Feb 19 21:51:42 crc kubenswrapper[4886]: I0219 21:51:42.778474 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-vdk6r" Feb 19 21:51:43 crc kubenswrapper[4886]: I0219 21:51:43.765838 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2tlbj" Feb 19 21:51:43 crc kubenswrapper[4886]: I0219 21:51:43.766520 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2tlbj" Feb 19 21:51:43 crc kubenswrapper[4886]: I0219 21:51:43.839353 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vdk6r" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="registry-server" probeResult="failure" output=< Feb 19 21:51:43 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:51:43 crc kubenswrapper[4886]: > Feb 19 21:51:44 crc kubenswrapper[4886]: I0219 21:51:44.819049 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2tlbj" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="registry-server" probeResult="failure" output=< Feb 19 21:51:44 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:51:44 crc kubenswrapper[4886]: > Feb 19 21:51:50 crc kubenswrapper[4886]: I0219 21:51:50.616992 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:51:50 crc kubenswrapper[4886]: E0219 21:51:50.617974 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:51:51 crc kubenswrapper[4886]: I0219 
21:51:51.430646 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzzqm" Feb 19 21:51:51 crc kubenswrapper[4886]: I0219 21:51:51.475999 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzzqm" Feb 19 21:51:52 crc kubenswrapper[4886]: I0219 21:51:52.828416 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vdk6r" Feb 19 21:51:52 crc kubenswrapper[4886]: I0219 21:51:52.828756 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzzqm"] Feb 19 21:51:52 crc kubenswrapper[4886]: I0219 21:51:52.830011 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nzzqm" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="registry-server" containerID="cri-o://31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae" gracePeriod=2 Feb 19 21:51:52 crc kubenswrapper[4886]: I0219 21:51:52.877389 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vdk6r" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.345378 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzzqm" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.437408 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4msn\" (UniqueName: \"kubernetes.io/projected/2fa842a8-de52-42ff-92aa-647120396456-kube-api-access-b4msn\") pod \"2fa842a8-de52-42ff-92aa-647120396456\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.443598 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa842a8-de52-42ff-92aa-647120396456-kube-api-access-b4msn" (OuterVolumeSpecName: "kube-api-access-b4msn") pod "2fa842a8-de52-42ff-92aa-647120396456" (UID: "2fa842a8-de52-42ff-92aa-647120396456"). InnerVolumeSpecName "kube-api-access-b4msn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.540434 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-catalog-content\") pod \"2fa842a8-de52-42ff-92aa-647120396456\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.540689 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-utilities\") pod \"2fa842a8-de52-42ff-92aa-647120396456\" (UID: \"2fa842a8-de52-42ff-92aa-647120396456\") " Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.541554 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-utilities" (OuterVolumeSpecName: "utilities") pod "2fa842a8-de52-42ff-92aa-647120396456" (UID: "2fa842a8-de52-42ff-92aa-647120396456"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.541779 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4msn\" (UniqueName: \"kubernetes.io/projected/2fa842a8-de52-42ff-92aa-647120396456-kube-api-access-b4msn\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.583180 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fa842a8-de52-42ff-92aa-647120396456" (UID: "2fa842a8-de52-42ff-92aa-647120396456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.607039 4886 generic.go:334] "Generic (PLEG): container finished" podID="2fa842a8-de52-42ff-92aa-647120396456" containerID="31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae" exitCode=0 Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.607117 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerDied","Data":"31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae"} Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.607147 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzzqm" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.607200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzzqm" event={"ID":"2fa842a8-de52-42ff-92aa-647120396456","Type":"ContainerDied","Data":"dcf7a3776097a9601978fb2b39f1ebcf89e89a7eb56adc8924b030443c91488f"} Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.607317 4886 scope.go:117] "RemoveContainer" containerID="31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.630767 4886 scope.go:117] "RemoveContainer" containerID="9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.651832 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.652057 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa842a8-de52-42ff-92aa-647120396456-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.672473 4886 scope.go:117] "RemoveContainer" containerID="8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.677571 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzzqm"] Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.698014 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nzzqm"] Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.709767 4886 scope.go:117] "RemoveContainer" containerID="31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae" Feb 19 
21:51:53 crc kubenswrapper[4886]: E0219 21:51:53.718079 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae\": container with ID starting with 31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae not found: ID does not exist" containerID="31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.718137 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae"} err="failed to get container status \"31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae\": rpc error: code = NotFound desc = could not find container \"31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae\": container with ID starting with 31b08b4976b276a323eed43962fc38bfc05d0b11a2d61b9c541ef11fc854b9ae not found: ID does not exist" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.718170 4886 scope.go:117] "RemoveContainer" containerID="9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839" Feb 19 21:51:53 crc kubenswrapper[4886]: E0219 21:51:53.718619 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839\": container with ID starting with 9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839 not found: ID does not exist" containerID="9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.718713 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839"} err="failed to get container status 
\"9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839\": rpc error: code = NotFound desc = could not find container \"9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839\": container with ID starting with 9f8d534422286c96bbbbee5f24cac4c63be761c6bae9cec1cb3c437f3f38a839 not found: ID does not exist" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.718786 4886 scope.go:117] "RemoveContainer" containerID="8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4" Feb 19 21:51:53 crc kubenswrapper[4886]: E0219 21:51:53.719247 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4\": container with ID starting with 8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4 not found: ID does not exist" containerID="8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.719380 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4"} err="failed to get container status \"8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4\": rpc error: code = NotFound desc = could not find container \"8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4\": container with ID starting with 8b238ea430b812f8d847b6f19e89863c7a7665f69e66b703dcbcd3b552decef4 not found: ID does not exist" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.826624 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2tlbj" Feb 19 21:51:53 crc kubenswrapper[4886]: I0219 21:51:53.879546 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2tlbj" Feb 19 21:51:54 crc kubenswrapper[4886]: I0219 
21:51:54.615409 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa842a8-de52-42ff-92aa-647120396456" path="/var/lib/kubelet/pods/2fa842a8-de52-42ff-92aa-647120396456/volumes" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.227971 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdk6r"] Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.228241 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vdk6r" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="registry-server" containerID="cri-o://d0e21034dd8790ed8fd800ed9d3f4b4c60c3d5847aef931fbfa6ef91396c7ca3" gracePeriod=2 Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.630440 4886 generic.go:334] "Generic (PLEG): container finished" podID="8405924e-452f-490b-be92-a6e7cc928f4d" containerID="d0e21034dd8790ed8fd800ed9d3f4b4c60c3d5847aef931fbfa6ef91396c7ca3" exitCode=0 Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.630795 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerDied","Data":"d0e21034dd8790ed8fd800ed9d3f4b4c60c3d5847aef931fbfa6ef91396c7ca3"} Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.787787 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdk6r" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.807979 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-utilities\") pod \"8405924e-452f-490b-be92-a6e7cc928f4d\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.808389 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-catalog-content\") pod \"8405924e-452f-490b-be92-a6e7cc928f4d\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.808550 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9w9\" (UniqueName: \"kubernetes.io/projected/8405924e-452f-490b-be92-a6e7cc928f4d-kube-api-access-2z9w9\") pod \"8405924e-452f-490b-be92-a6e7cc928f4d\" (UID: \"8405924e-452f-490b-be92-a6e7cc928f4d\") " Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.808914 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-utilities" (OuterVolumeSpecName: "utilities") pod "8405924e-452f-490b-be92-a6e7cc928f4d" (UID: "8405924e-452f-490b-be92-a6e7cc928f4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.809874 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.817773 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8405924e-452f-490b-be92-a6e7cc928f4d-kube-api-access-2z9w9" (OuterVolumeSpecName: "kube-api-access-2z9w9") pod "8405924e-452f-490b-be92-a6e7cc928f4d" (UID: "8405924e-452f-490b-be92-a6e7cc928f4d"). InnerVolumeSpecName "kube-api-access-2z9w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.836240 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8405924e-452f-490b-be92-a6e7cc928f4d" (UID: "8405924e-452f-490b-be92-a6e7cc928f4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.912409 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8405924e-452f-490b-be92-a6e7cc928f4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:55 crc kubenswrapper[4886]: I0219 21:51:55.912461 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9w9\" (UniqueName: \"kubernetes.io/projected/8405924e-452f-490b-be92-a6e7cc928f4d-kube-api-access-2z9w9\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.658110 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdk6r" event={"ID":"8405924e-452f-490b-be92-a6e7cc928f4d","Type":"ContainerDied","Data":"ca78135f06ae3729af80381ae375d2b9752f7e37e28bd156fc15295ea813d91f"} Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.658208 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdk6r" Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.658453 4886 scope.go:117] "RemoveContainer" containerID="d0e21034dd8790ed8fd800ed9d3f4b4c60c3d5847aef931fbfa6ef91396c7ca3" Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.698428 4886 scope.go:117] "RemoveContainer" containerID="5c0ad614638b6e395761c9cc73622c5d165da768befcc2a77233d21d89486bf3" Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.700296 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdk6r"] Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.714659 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdk6r"] Feb 19 21:51:56 crc kubenswrapper[4886]: I0219 21:51:56.731344 4886 scope.go:117] "RemoveContainer" containerID="707372db124573d46348062bd1085394c494606b40161b136c0a7566a2782868" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.030485 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tlbj"] Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.030834 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2tlbj" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="registry-server" containerID="cri-o://e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b" gracePeriod=2 Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.559535 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tlbj" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.669387 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-catalog-content\") pod \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.669561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ctk8\" (UniqueName: \"kubernetes.io/projected/10c2539a-86fc-42f4-9a7a-5ce4bc595589-kube-api-access-6ctk8\") pod \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.670204 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-utilities\") pod \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\" (UID: \"10c2539a-86fc-42f4-9a7a-5ce4bc595589\") " Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.673083 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-utilities" (OuterVolumeSpecName: "utilities") pod "10c2539a-86fc-42f4-9a7a-5ce4bc595589" (UID: "10c2539a-86fc-42f4-9a7a-5ce4bc595589"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.676977 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.688353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c2539a-86fc-42f4-9a7a-5ce4bc595589-kube-api-access-6ctk8" (OuterVolumeSpecName: "kube-api-access-6ctk8") pod "10c2539a-86fc-42f4-9a7a-5ce4bc595589" (UID: "10c2539a-86fc-42f4-9a7a-5ce4bc595589"). InnerVolumeSpecName "kube-api-access-6ctk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.688881 4886 generic.go:334] "Generic (PLEG): container finished" podID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerID="e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b" exitCode=0 Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.688974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerDied","Data":"e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b"} Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.689002 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlbj" event={"ID":"10c2539a-86fc-42f4-9a7a-5ce4bc595589","Type":"ContainerDied","Data":"f37b22209f977a2ef7a7d4a9dfdc3f71e7d155d31dcd46508a895d49e3c376ad"} Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.689019 4886 scope.go:117] "RemoveContainer" containerID="e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.689195 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tlbj" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.721452 4886 scope.go:117] "RemoveContainer" containerID="c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.722216 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10c2539a-86fc-42f4-9a7a-5ce4bc595589" (UID: "10c2539a-86fc-42f4-9a7a-5ce4bc595589"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.780455 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c2539a-86fc-42f4-9a7a-5ce4bc595589-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.780497 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ctk8\" (UniqueName: \"kubernetes.io/projected/10c2539a-86fc-42f4-9a7a-5ce4bc595589-kube-api-access-6ctk8\") on node \"crc\" DevicePath \"\"" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.802194 4886 scope.go:117] "RemoveContainer" containerID="ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.861672 4886 scope.go:117] "RemoveContainer" containerID="e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b" Feb 19 21:51:57 crc kubenswrapper[4886]: E0219 21:51:57.862184 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b\": container with ID starting with e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b not found: ID does not exist" 
containerID="e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.862235 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b"} err="failed to get container status \"e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b\": rpc error: code = NotFound desc = could not find container \"e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b\": container with ID starting with e890c7ca7e0068728b96dbe6feed9242bb1d90d8abc62cca918e65f1e850760b not found: ID does not exist" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.862280 4886 scope.go:117] "RemoveContainer" containerID="c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543" Feb 19 21:51:57 crc kubenswrapper[4886]: E0219 21:51:57.862786 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543\": container with ID starting with c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543 not found: ID does not exist" containerID="c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.862835 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543"} err="failed to get container status \"c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543\": rpc error: code = NotFound desc = could not find container \"c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543\": container with ID starting with c4eb97dca7accc6b8dd5ae2e6407b221a995967e2fadcde8f55fc8c481877543 not found: ID does not exist" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.862863 4886 scope.go:117] 
"RemoveContainer" containerID="ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197" Feb 19 21:51:57 crc kubenswrapper[4886]: E0219 21:51:57.863317 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197\": container with ID starting with ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197 not found: ID does not exist" containerID="ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197" Feb 19 21:51:57 crc kubenswrapper[4886]: I0219 21:51:57.863336 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197"} err="failed to get container status \"ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197\": rpc error: code = NotFound desc = could not find container \"ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197\": container with ID starting with ddfa1309c16cbc22260de067465247fb9e55002bf474bc1797d2e605f3811197 not found: ID does not exist" Feb 19 21:51:58 crc kubenswrapper[4886]: I0219 21:51:58.028126 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tlbj"] Feb 19 21:51:58 crc kubenswrapper[4886]: I0219 21:51:58.037816 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2tlbj"] Feb 19 21:51:58 crc kubenswrapper[4886]: I0219 21:51:58.624898 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" path="/var/lib/kubelet/pods/10c2539a-86fc-42f4-9a7a-5ce4bc595589/volumes" Feb 19 21:51:58 crc kubenswrapper[4886]: I0219 21:51:58.626279 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" 
path="/var/lib/kubelet/pods/8405924e-452f-490b-be92-a6e7cc928f4d/volumes" Feb 19 21:52:01 crc kubenswrapper[4886]: I0219 21:52:01.602421 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:52:01 crc kubenswrapper[4886]: E0219 21:52:01.603189 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:52:12 crc kubenswrapper[4886]: I0219 21:52:12.601045 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:52:12 crc kubenswrapper[4886]: E0219 21:52:12.601843 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:52:27 crc kubenswrapper[4886]: I0219 21:52:27.604546 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:52:27 crc kubenswrapper[4886]: E0219 21:52:27.605815 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:52:42 crc kubenswrapper[4886]: I0219 21:52:42.602289 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:52:42 crc kubenswrapper[4886]: E0219 21:52:42.603477 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:52:55 crc kubenswrapper[4886]: I0219 21:52:55.601052 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:52:56 crc kubenswrapper[4886]: I0219 21:52:56.423444 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"d61801c6b678370f659bee7052e606b89e70995006ad746d73319453eaf76bac"} Feb 19 21:53:34 crc kubenswrapper[4886]: E0219 21:53:34.423256 4886 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.30:35174->38.102.83.30:40589: read tcp 38.102.83.30:35174->38.102.83.30:40589: read: connection reset by peer Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.255646 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbfz5"] Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257478 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="extract-utilities" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 
21:54:41.257511 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="extract-utilities" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257536 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257543 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257557 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="extract-content" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257562 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="extract-content" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257576 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257582 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257596 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="extract-utilities" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257603 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="extract-utilities" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257617 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="extract-utilities" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 
21:54:41.257624 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="extract-utilities" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257635 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="extract-content" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257659 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="extract-content" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257680 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257687 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: E0219 21:54:41.257696 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="extract-content" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257702 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="extract-content" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257957 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c2539a-86fc-42f4-9a7a-5ce4bc595589" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.257988 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa842a8-de52-42ff-92aa-647120396456" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.258003 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8405924e-452f-490b-be92-a6e7cc928f4d" containerName="registry-server" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 
21:54:41.259794 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.277935 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbfz5"] Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.423784 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-utilities\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.424408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrn9\" (UniqueName: \"kubernetes.io/projected/b68b330d-5d09-40cc-a1ee-c036321493c8-kube-api-access-wkrn9\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.424610 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-catalog-content\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.526414 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrn9\" (UniqueName: \"kubernetes.io/projected/b68b330d-5d09-40cc-a1ee-c036321493c8-kube-api-access-wkrn9\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 
21:54:41.526510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-catalog-content\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.526585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-utilities\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.529431 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-utilities\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.529967 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-catalog-content\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.556995 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrn9\" (UniqueName: \"kubernetes.io/projected/b68b330d-5d09-40cc-a1ee-c036321493c8-kube-api-access-wkrn9\") pod \"redhat-operators-fbfz5\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:41 crc kubenswrapper[4886]: I0219 21:54:41.589292 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:42 crc kubenswrapper[4886]: I0219 21:54:42.068932 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbfz5"] Feb 19 21:54:42 crc kubenswrapper[4886]: I0219 21:54:42.701813 4886 generic.go:334] "Generic (PLEG): container finished" podID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerID="38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07" exitCode=0 Feb 19 21:54:42 crc kubenswrapper[4886]: I0219 21:54:42.701927 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerDied","Data":"38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07"} Feb 19 21:54:42 crc kubenswrapper[4886]: I0219 21:54:42.702103 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerStarted","Data":"dc1901855627631a0bcbc2c7d171ce6e133f0e6a7062c310dc970afa0e0de053"} Feb 19 21:54:42 crc kubenswrapper[4886]: I0219 21:54:42.710943 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:54:44 crc kubenswrapper[4886]: I0219 21:54:44.732988 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerStarted","Data":"b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a"} Feb 19 21:54:48 crc kubenswrapper[4886]: I0219 21:54:48.783468 4886 generic.go:334] "Generic (PLEG): container finished" podID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerID="b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a" exitCode=0 Feb 19 21:54:48 crc kubenswrapper[4886]: I0219 21:54:48.783601 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerDied","Data":"b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a"} Feb 19 21:54:49 crc kubenswrapper[4886]: I0219 21:54:49.800984 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerStarted","Data":"980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51"} Feb 19 21:54:49 crc kubenswrapper[4886]: I0219 21:54:49.824001 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbfz5" podStartSLOduration=2.242057348 podStartE2EDuration="8.823980274s" podCreationTimestamp="2026-02-19 21:54:41 +0000 UTC" firstStartedPulling="2026-02-19 21:54:42.707997194 +0000 UTC m=+3313.335840244" lastFinishedPulling="2026-02-19 21:54:49.28992012 +0000 UTC m=+3319.917763170" observedRunningTime="2026-02-19 21:54:49.818246202 +0000 UTC m=+3320.446089262" watchObservedRunningTime="2026-02-19 21:54:49.823980274 +0000 UTC m=+3320.451823324" Feb 19 21:54:51 crc kubenswrapper[4886]: I0219 21:54:51.589963 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:51 crc kubenswrapper[4886]: I0219 21:54:51.590545 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:54:52 crc kubenswrapper[4886]: I0219 21:54:52.649421 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fbfz5" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="registry-server" probeResult="failure" output=< Feb 19 21:54:52 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:54:52 crc kubenswrapper[4886]: > Feb 19 21:54:55 crc kubenswrapper[4886]: E0219 
21:54:55.098780 4886 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:54442->38.102.83.30:40589: write tcp 38.102.83.30:54442->38.102.83.30:40589: write: broken pipe Feb 19 21:55:02 crc kubenswrapper[4886]: I0219 21:55:02.644324 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fbfz5" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="registry-server" probeResult="failure" output=< Feb 19 21:55:02 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 21:55:02 crc kubenswrapper[4886]: > Feb 19 21:55:11 crc kubenswrapper[4886]: I0219 21:55:11.642245 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:55:11 crc kubenswrapper[4886]: I0219 21:55:11.709564 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:55:12 crc kubenswrapper[4886]: I0219 21:55:12.474012 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbfz5"] Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.065358 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbfz5" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="registry-server" containerID="cri-o://980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51" gracePeriod=2 Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.670840 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.725698 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-catalog-content\") pod \"b68b330d-5d09-40cc-a1ee-c036321493c8\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.725881 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkrn9\" (UniqueName: \"kubernetes.io/projected/b68b330d-5d09-40cc-a1ee-c036321493c8-kube-api-access-wkrn9\") pod \"b68b330d-5d09-40cc-a1ee-c036321493c8\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.726202 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-utilities\") pod \"b68b330d-5d09-40cc-a1ee-c036321493c8\" (UID: \"b68b330d-5d09-40cc-a1ee-c036321493c8\") " Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.729006 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-utilities" (OuterVolumeSpecName: "utilities") pod "b68b330d-5d09-40cc-a1ee-c036321493c8" (UID: "b68b330d-5d09-40cc-a1ee-c036321493c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.734470 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68b330d-5d09-40cc-a1ee-c036321493c8-kube-api-access-wkrn9" (OuterVolumeSpecName: "kube-api-access-wkrn9") pod "b68b330d-5d09-40cc-a1ee-c036321493c8" (UID: "b68b330d-5d09-40cc-a1ee-c036321493c8"). InnerVolumeSpecName "kube-api-access-wkrn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.829988 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.830019 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkrn9\" (UniqueName: \"kubernetes.io/projected/b68b330d-5d09-40cc-a1ee-c036321493c8-kube-api-access-wkrn9\") on node \"crc\" DevicePath \"\"" Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.857051 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b68b330d-5d09-40cc-a1ee-c036321493c8" (UID: "b68b330d-5d09-40cc-a1ee-c036321493c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:55:13 crc kubenswrapper[4886]: I0219 21:55:13.932427 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b330d-5d09-40cc-a1ee-c036321493c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.082139 4886 generic.go:334] "Generic (PLEG): container finished" podID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerID="980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51" exitCode=0 Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.082207 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerDied","Data":"980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51"} Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.082271 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fbfz5" event={"ID":"b68b330d-5d09-40cc-a1ee-c036321493c8","Type":"ContainerDied","Data":"dc1901855627631a0bcbc2c7d171ce6e133f0e6a7062c310dc970afa0e0de053"} Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.082302 4886 scope.go:117] "RemoveContainer" containerID="980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.083642 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbfz5" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.111555 4886 scope.go:117] "RemoveContainer" containerID="b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.123206 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbfz5"] Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.136038 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbfz5"] Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.145533 4886 scope.go:117] "RemoveContainer" containerID="38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.201172 4886 scope.go:117] "RemoveContainer" containerID="980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51" Feb 19 21:55:14 crc kubenswrapper[4886]: E0219 21:55:14.201816 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51\": container with ID starting with 980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51 not found: ID does not exist" containerID="980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.201880 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51"} err="failed to get container status \"980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51\": rpc error: code = NotFound desc = could not find container \"980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51\": container with ID starting with 980e109a512d9687fabafcca0ed421dbc63d95678384fab374db8f614f0aac51 not found: ID does not exist" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.201918 4886 scope.go:117] "RemoveContainer" containerID="b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a" Feb 19 21:55:14 crc kubenswrapper[4886]: E0219 21:55:14.202249 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a\": container with ID starting with b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a not found: ID does not exist" containerID="b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.202307 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a"} err="failed to get container status \"b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a\": rpc error: code = NotFound desc = could not find container \"b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a\": container with ID starting with b2d07617ebca80f2a831e42b9a5ffc6f70f3ca0c10c399490eb101834c5a8e6a not found: ID does not exist" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.202356 4886 scope.go:117] "RemoveContainer" containerID="38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07" Feb 19 21:55:14 crc kubenswrapper[4886]: E0219 
21:55:14.204137 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07\": container with ID starting with 38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07 not found: ID does not exist" containerID="38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.204175 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07"} err="failed to get container status \"38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07\": rpc error: code = NotFound desc = could not find container \"38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07\": container with ID starting with 38404fabd113661626b594235255aed021a3b93d87040b311cdab0d81a971b07 not found: ID does not exist" Feb 19 21:55:14 crc kubenswrapper[4886]: I0219 21:55:14.624365 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" path="/var/lib/kubelet/pods/b68b330d-5d09-40cc-a1ee-c036321493c8/volumes" Feb 19 21:55:18 crc kubenswrapper[4886]: I0219 21:55:18.324713 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:55:18 crc kubenswrapper[4886]: I0219 21:55:18.325534 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 21:55:48 crc kubenswrapper[4886]: I0219 21:55:48.325131 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:55:48 crc kubenswrapper[4886]: I0219 21:55:48.325844 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.325179 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.325724 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.325768 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.326718 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d61801c6b678370f659bee7052e606b89e70995006ad746d73319453eaf76bac"} 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.326762 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://d61801c6b678370f659bee7052e606b89e70995006ad746d73319453eaf76bac" gracePeriod=600 Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.881243 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="d61801c6b678370f659bee7052e606b89e70995006ad746d73319453eaf76bac" exitCode=0 Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.881362 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"d61801c6b678370f659bee7052e606b89e70995006ad746d73319453eaf76bac"} Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.881629 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"} Feb 19 21:56:18 crc kubenswrapper[4886]: I0219 21:56:18.881653 4886 scope.go:117] "RemoveContainer" containerID="5fad8506e8e61b71168435c6aad40d5aab1391b96e32e4b86d91fa70e3e6e7ed" Feb 19 21:58:18 crc kubenswrapper[4886]: I0219 21:58:18.324473 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 21:58:18 crc kubenswrapper[4886]: I0219 21:58:18.326065 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:58:48 crc kubenswrapper[4886]: I0219 21:58:48.324472 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:58:48 crc kubenswrapper[4886]: I0219 21:58:48.326022 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:59:18 crc kubenswrapper[4886]: I0219 21:59:18.325055 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:59:18 crc kubenswrapper[4886]: I0219 21:59:18.325710 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:59:18 crc kubenswrapper[4886]: I0219 21:59:18.325772 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 21:59:18 crc kubenswrapper[4886]: I0219 21:59:18.326879 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:59:18 crc kubenswrapper[4886]: I0219 21:59:18.326963 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" gracePeriod=600 Feb 19 21:59:18 crc kubenswrapper[4886]: E0219 21:59:18.457055 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:59:19 crc kubenswrapper[4886]: I0219 21:59:19.082429 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" exitCode=0 Feb 19 21:59:19 crc kubenswrapper[4886]: I0219 21:59:19.082501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" 
event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"} Feb 19 21:59:19 crc kubenswrapper[4886]: I0219 21:59:19.082584 4886 scope.go:117] "RemoveContainer" containerID="d61801c6b678370f659bee7052e606b89e70995006ad746d73319453eaf76bac" Feb 19 21:59:19 crc kubenswrapper[4886]: I0219 21:59:19.083329 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 21:59:19 crc kubenswrapper[4886]: E0219 21:59:19.083720 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:59:30 crc kubenswrapper[4886]: I0219 21:59:30.612164 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 21:59:30 crc kubenswrapper[4886]: E0219 21:59:30.615341 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:59:43 crc kubenswrapper[4886]: I0219 21:59:43.601675 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 21:59:43 crc kubenswrapper[4886]: E0219 21:59:43.602730 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 21:59:56 crc kubenswrapper[4886]: I0219 21:59:56.601438 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 21:59:56 crc kubenswrapper[4886]: E0219 21:59:56.602238 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.186530 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc"] Feb 19 22:00:00 crc kubenswrapper[4886]: E0219 22:00:00.187697 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="extract-utilities" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.187726 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="extract-utilities" Feb 19 22:00:00 crc kubenswrapper[4886]: E0219 22:00:00.187779 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="registry-server" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.187794 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" 
containerName="registry-server" Feb 19 22:00:00 crc kubenswrapper[4886]: E0219 22:00:00.187845 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="extract-content" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.187858 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="extract-content" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.188209 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68b330d-5d09-40cc-a1ee-c036321493c8" containerName="registry-server" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.189609 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.191610 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.194046 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.199508 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc"] Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.283617 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4940ba-3199-4ea3-be43-d2c9057342cb-secret-volume\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.283711 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx8hd\" (UniqueName: \"kubernetes.io/projected/ec4940ba-3199-4ea3-be43-d2c9057342cb-kube-api-access-fx8hd\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.283900 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4940ba-3199-4ea3-be43-d2c9057342cb-config-volume\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.386436 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4940ba-3199-4ea3-be43-d2c9057342cb-secret-volume\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.386810 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx8hd\" (UniqueName: \"kubernetes.io/projected/ec4940ba-3199-4ea3-be43-d2c9057342cb-kube-api-access-fx8hd\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.386973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4940ba-3199-4ea3-be43-d2c9057342cb-config-volume\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.387889 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4940ba-3199-4ea3-be43-d2c9057342cb-config-volume\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.394374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4940ba-3199-4ea3-be43-d2c9057342cb-secret-volume\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.404368 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx8hd\" (UniqueName: \"kubernetes.io/projected/ec4940ba-3199-4ea3-be43-d2c9057342cb-kube-api-access-fx8hd\") pod \"collect-profiles-29525640-khpbc\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:00 crc kubenswrapper[4886]: I0219 22:00:00.513833 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:01 crc kubenswrapper[4886]: I0219 22:00:01.061653 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc"] Feb 19 22:00:01 crc kubenswrapper[4886]: I0219 22:00:01.575128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" event={"ID":"ec4940ba-3199-4ea3-be43-d2c9057342cb","Type":"ContainerStarted","Data":"b990b7a6c1d2b9322f851a5d0c97215283375f568a6dfc135db835c782f48cc4"} Feb 19 22:00:01 crc kubenswrapper[4886]: I0219 22:00:01.575589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" event={"ID":"ec4940ba-3199-4ea3-be43-d2c9057342cb","Type":"ContainerStarted","Data":"628f96da8af34bbf24af73f551f1daa2880bc44d898223ad8c89849c977eac51"} Feb 19 22:00:01 crc kubenswrapper[4886]: I0219 22:00:01.603419 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" podStartSLOduration=1.603391129 podStartE2EDuration="1.603391129s" podCreationTimestamp="2026-02-19 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 22:00:01.591512166 +0000 UTC m=+3632.219355226" watchObservedRunningTime="2026-02-19 22:00:01.603391129 +0000 UTC m=+3632.231234209" Feb 19 22:00:02 crc kubenswrapper[4886]: I0219 22:00:02.596861 4886 generic.go:334] "Generic (PLEG): container finished" podID="ec4940ba-3199-4ea3-be43-d2c9057342cb" containerID="b990b7a6c1d2b9322f851a5d0c97215283375f568a6dfc135db835c782f48cc4" exitCode=0 Feb 19 22:00:02 crc kubenswrapper[4886]: I0219 22:00:02.596932 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" event={"ID":"ec4940ba-3199-4ea3-be43-d2c9057342cb","Type":"ContainerDied","Data":"b990b7a6c1d2b9322f851a5d0c97215283375f568a6dfc135db835c782f48cc4"} Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.036551 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.088458 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4940ba-3199-4ea3-be43-d2c9057342cb-secret-volume\") pod \"ec4940ba-3199-4ea3-be43-d2c9057342cb\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.088799 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx8hd\" (UniqueName: \"kubernetes.io/projected/ec4940ba-3199-4ea3-be43-d2c9057342cb-kube-api-access-fx8hd\") pod \"ec4940ba-3199-4ea3-be43-d2c9057342cb\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.088834 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4940ba-3199-4ea3-be43-d2c9057342cb-config-volume\") pod \"ec4940ba-3199-4ea3-be43-d2c9057342cb\" (UID: \"ec4940ba-3199-4ea3-be43-d2c9057342cb\") " Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.089923 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4940ba-3199-4ea3-be43-d2c9057342cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec4940ba-3199-4ea3-be43-d2c9057342cb" (UID: "ec4940ba-3199-4ea3-be43-d2c9057342cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.095176 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4940ba-3199-4ea3-be43-d2c9057342cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec4940ba-3199-4ea3-be43-d2c9057342cb" (UID: "ec4940ba-3199-4ea3-be43-d2c9057342cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.095624 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4940ba-3199-4ea3-be43-d2c9057342cb-kube-api-access-fx8hd" (OuterVolumeSpecName: "kube-api-access-fx8hd") pod "ec4940ba-3199-4ea3-be43-d2c9057342cb" (UID: "ec4940ba-3199-4ea3-be43-d2c9057342cb"). InnerVolumeSpecName "kube-api-access-fx8hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.191614 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx8hd\" (UniqueName: \"kubernetes.io/projected/ec4940ba-3199-4ea3-be43-d2c9057342cb-kube-api-access-fx8hd\") on node \"crc\" DevicePath \"\"" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.191664 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4940ba-3199-4ea3-be43-d2c9057342cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.191678 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4940ba-3199-4ea3-be43-d2c9057342cb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.627430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" 
event={"ID":"ec4940ba-3199-4ea3-be43-d2c9057342cb","Type":"ContainerDied","Data":"628f96da8af34bbf24af73f551f1daa2880bc44d898223ad8c89849c977eac51"} Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.627469 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="628f96da8af34bbf24af73f551f1daa2880bc44d898223ad8c89849c977eac51" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.627504 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525640-khpbc" Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.682859 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn"] Feb 19 22:00:04 crc kubenswrapper[4886]: I0219 22:00:04.694396 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-9kljn"] Feb 19 22:00:06 crc kubenswrapper[4886]: I0219 22:00:06.628512 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9dbcc8a-fdec-4646-b231-8f5f8bce060d" path="/var/lib/kubelet/pods/a9dbcc8a-fdec-4646-b231-8f5f8bce060d/volumes" Feb 19 22:00:10 crc kubenswrapper[4886]: I0219 22:00:10.608564 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:00:10 crc kubenswrapper[4886]: E0219 22:00:10.609380 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:00:23 crc kubenswrapper[4886]: I0219 22:00:23.601559 4886 scope.go:117] "RemoveContainer" 
containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:00:23 crc kubenswrapper[4886]: E0219 22:00:23.604043 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:00:34 crc kubenswrapper[4886]: I0219 22:00:34.601224 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:00:34 crc kubenswrapper[4886]: E0219 22:00:34.602047 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:00:46 crc kubenswrapper[4886]: I0219 22:00:46.601612 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:00:46 crc kubenswrapper[4886]: E0219 22:00:46.602455 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:00:55 crc kubenswrapper[4886]: I0219 22:00:55.565792 4886 scope.go:117] 
"RemoveContainer" containerID="088a70e9f76feb2ddd4c405ba5168f1819004407f12182a81daac0e2d86c2a3c" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.174152 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525641-dxqhv"] Feb 19 22:01:00 crc kubenswrapper[4886]: E0219 22:01:00.175322 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4940ba-3199-4ea3-be43-d2c9057342cb" containerName="collect-profiles" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.175340 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4940ba-3199-4ea3-be43-d2c9057342cb" containerName="collect-profiles" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.175654 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4940ba-3199-4ea3-be43-d2c9057342cb" containerName="collect-profiles" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.176674 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.193766 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525641-dxqhv"] Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.201379 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8xt\" (UniqueName: \"kubernetes.io/projected/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-kube-api-access-fr8xt\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.201431 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-config-data\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " 
pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.201455 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-fernet-keys\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.201961 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-combined-ca-bundle\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.305392 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8xt\" (UniqueName: \"kubernetes.io/projected/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-kube-api-access-fr8xt\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.305727 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-config-data\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.305745 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-fernet-keys\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " 
pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.305798 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-combined-ca-bundle\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.313036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-fernet-keys\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.313216 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-config-data\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.334814 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8xt\" (UniqueName: \"kubernetes.io/projected/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-kube-api-access-fr8xt\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.339886 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-combined-ca-bundle\") pod \"keystone-cron-29525641-dxqhv\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc 
kubenswrapper[4886]: I0219 22:01:00.505644 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:00 crc kubenswrapper[4886]: I0219 22:01:00.976623 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525641-dxqhv"] Feb 19 22:01:01 crc kubenswrapper[4886]: I0219 22:01:01.279541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525641-dxqhv" event={"ID":"26f52df8-9ecb-4d70-9d0a-a5bc247a168e","Type":"ContainerStarted","Data":"a032aa104dc5156806e55f81afdc0daf89ecaa5835a21ca09cf8847b28201b22"} Feb 19 22:01:01 crc kubenswrapper[4886]: I0219 22:01:01.279892 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525641-dxqhv" event={"ID":"26f52df8-9ecb-4d70-9d0a-a5bc247a168e","Type":"ContainerStarted","Data":"78feeb924f0db3d72719bcca4a425b9eebb4289cfd1657f122936a5323bdf10c"} Feb 19 22:01:01 crc kubenswrapper[4886]: I0219 22:01:01.306682 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525641-dxqhv" podStartSLOduration=1.3066637349999999 podStartE2EDuration="1.306663735s" podCreationTimestamp="2026-02-19 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 22:01:01.296611797 +0000 UTC m=+3691.924454857" watchObservedRunningTime="2026-02-19 22:01:01.306663735 +0000 UTC m=+3691.934506775" Feb 19 22:01:01 crc kubenswrapper[4886]: I0219 22:01:01.601881 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:01:01 crc kubenswrapper[4886]: E0219 22:01:01.602499 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:01:06 crc kubenswrapper[4886]: I0219 22:01:06.348570 4886 generic.go:334] "Generic (PLEG): container finished" podID="26f52df8-9ecb-4d70-9d0a-a5bc247a168e" containerID="a032aa104dc5156806e55f81afdc0daf89ecaa5835a21ca09cf8847b28201b22" exitCode=0 Feb 19 22:01:06 crc kubenswrapper[4886]: I0219 22:01:06.348686 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525641-dxqhv" event={"ID":"26f52df8-9ecb-4d70-9d0a-a5bc247a168e","Type":"ContainerDied","Data":"a032aa104dc5156806e55f81afdc0daf89ecaa5835a21ca09cf8847b28201b22"} Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.801912 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.900966 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr8xt\" (UniqueName: \"kubernetes.io/projected/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-kube-api-access-fr8xt\") pod \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.901124 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-config-data\") pod \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.901184 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-fernet-keys\") pod \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\" (UID: 
\"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.901204 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-combined-ca-bundle\") pod \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\" (UID: \"26f52df8-9ecb-4d70-9d0a-a5bc247a168e\") " Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.906295 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "26f52df8-9ecb-4d70-9d0a-a5bc247a168e" (UID: "26f52df8-9ecb-4d70-9d0a-a5bc247a168e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.906764 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-kube-api-access-fr8xt" (OuterVolumeSpecName: "kube-api-access-fr8xt") pod "26f52df8-9ecb-4d70-9d0a-a5bc247a168e" (UID: "26f52df8-9ecb-4d70-9d0a-a5bc247a168e"). InnerVolumeSpecName "kube-api-access-fr8xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.948590 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26f52df8-9ecb-4d70-9d0a-a5bc247a168e" (UID: "26f52df8-9ecb-4d70-9d0a-a5bc247a168e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:01:07 crc kubenswrapper[4886]: I0219 22:01:07.959604 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-config-data" (OuterVolumeSpecName: "config-data") pod "26f52df8-9ecb-4d70-9d0a-a5bc247a168e" (UID: "26f52df8-9ecb-4d70-9d0a-a5bc247a168e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.004796 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.004839 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.004852 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.004869 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr8xt\" (UniqueName: \"kubernetes.io/projected/26f52df8-9ecb-4d70-9d0a-a5bc247a168e-kube-api-access-fr8xt\") on node \"crc\" DevicePath \"\"" Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.376733 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525641-dxqhv" event={"ID":"26f52df8-9ecb-4d70-9d0a-a5bc247a168e","Type":"ContainerDied","Data":"78feeb924f0db3d72719bcca4a425b9eebb4289cfd1657f122936a5323bdf10c"} Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.376787 4886 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="78feeb924f0db3d72719bcca4a425b9eebb4289cfd1657f122936a5323bdf10c" Feb 19 22:01:08 crc kubenswrapper[4886]: I0219 22:01:08.376840 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525641-dxqhv" Feb 19 22:01:16 crc kubenswrapper[4886]: I0219 22:01:16.602590 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:01:16 crc kubenswrapper[4886]: E0219 22:01:16.603690 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:01:28 crc kubenswrapper[4886]: I0219 22:01:28.601566 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:01:28 crc kubenswrapper[4886]: E0219 22:01:28.602397 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:01:42 crc kubenswrapper[4886]: I0219 22:01:42.603197 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:01:42 crc kubenswrapper[4886]: E0219 22:01:42.605524 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:01:57 crc kubenswrapper[4886]: I0219 22:01:57.602465 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:01:57 crc kubenswrapper[4886]: E0219 22:01:57.603464 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.404941 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b7lwx"] Feb 19 22:02:03 crc kubenswrapper[4886]: E0219 22:02:03.405936 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f52df8-9ecb-4d70-9d0a-a5bc247a168e" containerName="keystone-cron" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.405949 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f52df8-9ecb-4d70-9d0a-a5bc247a168e" containerName="keystone-cron" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.406184 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f52df8-9ecb-4d70-9d0a-a5bc247a168e" containerName="keystone-cron" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.407908 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.416348 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7lwx"] Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.562943 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-utilities\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.563030 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jfd\" (UniqueName: \"kubernetes.io/projected/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-kube-api-access-f6jfd\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.563228 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-catalog-content\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.666611 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-catalog-content\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.666833 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-utilities\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.666867 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jfd\" (UniqueName: \"kubernetes.io/projected/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-kube-api-access-f6jfd\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.667936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-catalog-content\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.667953 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-utilities\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.705112 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jfd\" (UniqueName: \"kubernetes.io/projected/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-kube-api-access-f6jfd\") pod \"community-operators-b7lwx\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:03 crc kubenswrapper[4886]: I0219 22:02:03.736340 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:04 crc kubenswrapper[4886]: I0219 22:02:04.206002 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7lwx"] Feb 19 22:02:05 crc kubenswrapper[4886]: I0219 22:02:05.035944 4886 generic.go:334] "Generic (PLEG): container finished" podID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerID="4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec" exitCode=0 Feb 19 22:02:05 crc kubenswrapper[4886]: I0219 22:02:05.036051 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerDied","Data":"4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec"} Feb 19 22:02:05 crc kubenswrapper[4886]: I0219 22:02:05.036453 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerStarted","Data":"40fbf441a088bbdafc6be4c3d4f209b583284a1b7ba437b688b419d8e283c938"} Feb 19 22:02:05 crc kubenswrapper[4886]: I0219 22:02:05.038849 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 22:02:06 crc kubenswrapper[4886]: I0219 22:02:06.048625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerStarted","Data":"a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6"} Feb 19 22:02:08 crc kubenswrapper[4886]: I0219 22:02:08.078103 4886 generic.go:334] "Generic (PLEG): container finished" podID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerID="a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6" exitCode=0 Feb 19 22:02:08 crc kubenswrapper[4886]: I0219 22:02:08.078148 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerDied","Data":"a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6"} Feb 19 22:02:08 crc kubenswrapper[4886]: I0219 22:02:08.601922 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:02:08 crc kubenswrapper[4886]: E0219 22:02:08.602516 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:02:09 crc kubenswrapper[4886]: I0219 22:02:09.090954 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerStarted","Data":"5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005"} Feb 19 22:02:09 crc kubenswrapper[4886]: I0219 22:02:09.112394 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b7lwx" podStartSLOduration=2.6671196630000003 podStartE2EDuration="6.112378329s" podCreationTimestamp="2026-02-19 22:02:03 +0000 UTC" firstStartedPulling="2026-02-19 22:02:05.038620254 +0000 UTC m=+3755.666463304" lastFinishedPulling="2026-02-19 22:02:08.48387891 +0000 UTC m=+3759.111721970" observedRunningTime="2026-02-19 22:02:09.110104513 +0000 UTC m=+3759.737947553" watchObservedRunningTime="2026-02-19 22:02:09.112378329 +0000 UTC m=+3759.740221379" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.678093 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-dm9n9"] Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.681212 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.690023 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dm9n9"] Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.757312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-utilities\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.757400 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrpj\" (UniqueName: \"kubernetes.io/projected/f8366121-277e-408e-a98d-8db27c422fa0-kube-api-access-crrpj\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.757915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-catalog-content\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.860331 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-utilities\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") 
" pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.860394 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crrpj\" (UniqueName: \"kubernetes.io/projected/f8366121-277e-408e-a98d-8db27c422fa0-kube-api-access-crrpj\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.860502 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-catalog-content\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.861016 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-utilities\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.861126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-catalog-content\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:10 crc kubenswrapper[4886]: I0219 22:02:10.888249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crrpj\" (UniqueName: \"kubernetes.io/projected/f8366121-277e-408e-a98d-8db27c422fa0-kube-api-access-crrpj\") pod \"certified-operators-dm9n9\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") " 
pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:11 crc kubenswrapper[4886]: I0219 22:02:11.011229 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dm9n9" Feb 19 22:02:11 crc kubenswrapper[4886]: I0219 22:02:11.520942 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dm9n9"] Feb 19 22:02:12 crc kubenswrapper[4886]: I0219 22:02:12.124081 4886 generic.go:334] "Generic (PLEG): container finished" podID="f8366121-277e-408e-a98d-8db27c422fa0" containerID="9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3" exitCode=0 Feb 19 22:02:12 crc kubenswrapper[4886]: I0219 22:02:12.124193 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerDied","Data":"9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3"} Feb 19 22:02:12 crc kubenswrapper[4886]: I0219 22:02:12.124546 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerStarted","Data":"6ef14092b1b0938635a483f880d575a773d4d0f2154ef8c567133e35e43edb35"} Feb 19 22:02:13 crc kubenswrapper[4886]: I0219 22:02:13.136413 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerStarted","Data":"d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de"} Feb 19 22:02:13 crc kubenswrapper[4886]: I0219 22:02:13.736820 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:13 crc kubenswrapper[4886]: I0219 22:02:13.737357 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:13 crc kubenswrapper[4886]: I0219 22:02:13.810298 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:14 crc kubenswrapper[4886]: I0219 22:02:14.238365 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:15 crc kubenswrapper[4886]: I0219 22:02:15.162687 4886 generic.go:334] "Generic (PLEG): container finished" podID="f8366121-277e-408e-a98d-8db27c422fa0" containerID="d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de" exitCode=0 Feb 19 22:02:15 crc kubenswrapper[4886]: I0219 22:02:15.162771 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerDied","Data":"d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de"} Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.057389 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7lwx"] Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.180417 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b7lwx" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="registry-server" containerID="cri-o://5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005" gracePeriod=2 Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.180564 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerStarted","Data":"5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80"} Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.214214 4886 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-dm9n9" podStartSLOduration=2.671126538 podStartE2EDuration="6.21418903s" podCreationTimestamp="2026-02-19 22:02:10 +0000 UTC" firstStartedPulling="2026-02-19 22:02:12.126910153 +0000 UTC m=+3762.754753223" lastFinishedPulling="2026-02-19 22:02:15.669972645 +0000 UTC m=+3766.297815715" observedRunningTime="2026-02-19 22:02:16.206718926 +0000 UTC m=+3766.834561986" watchObservedRunningTime="2026-02-19 22:02:16.21418903 +0000 UTC m=+3766.842032080" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.713320 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7lwx" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.824968 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-utilities\") pod \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.825085 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6jfd\" (UniqueName: \"kubernetes.io/projected/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-kube-api-access-f6jfd\") pod \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.825165 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-catalog-content\") pod \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\" (UID: \"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59\") " Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.826221 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-utilities" 
(OuterVolumeSpecName: "utilities") pod "ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" (UID: "ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.826694 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.831884 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-kube-api-access-f6jfd" (OuterVolumeSpecName: "kube-api-access-f6jfd") pod "ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" (UID: "ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59"). InnerVolumeSpecName "kube-api-access-f6jfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.885493 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" (UID: "ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.929544 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6jfd\" (UniqueName: \"kubernetes.io/projected/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-kube-api-access-f6jfd\") on node \"crc\" DevicePath \"\"" Feb 19 22:02:16 crc kubenswrapper[4886]: I0219 22:02:16.929596 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.193698 4886 generic.go:334] "Generic (PLEG): container finished" podID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerID="5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005" exitCode=0 Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.193745 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerDied","Data":"5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005"} Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.193776 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7lwx" event={"ID":"ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59","Type":"ContainerDied","Data":"40fbf441a088bbdafc6be4c3d4f209b583284a1b7ba437b688b419d8e283c938"} Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.193796 4886 scope.go:117] "RemoveContainer" containerID="5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005" Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.193843 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7lwx"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.225917 4886 scope.go:117] "RemoveContainer" containerID="a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.246696 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7lwx"]
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.250439 4886 scope.go:117] "RemoveContainer" containerID="4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.272115 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b7lwx"]
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.331976 4886 scope.go:117] "RemoveContainer" containerID="5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005"
Feb 19 22:02:17 crc kubenswrapper[4886]: E0219 22:02:17.332548 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005\": container with ID starting with 5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005 not found: ID does not exist" containerID="5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.332597 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005"} err="failed to get container status \"5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005\": rpc error: code = NotFound desc = could not find container \"5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005\": container with ID starting with 5e080f474a5bcd67faf5aab6638dcc287b0d2bdc635c0eaada0376fb88cd2005 not found: ID does not exist"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.332627 4886 scope.go:117] "RemoveContainer" containerID="a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6"
Feb 19 22:02:17 crc kubenswrapper[4886]: E0219 22:02:17.333179 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6\": container with ID starting with a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6 not found: ID does not exist" containerID="a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.333211 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6"} err="failed to get container status \"a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6\": rpc error: code = NotFound desc = could not find container \"a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6\": container with ID starting with a7e953db68105c2d4a6c1fd1b77bb87d0f0a2e8b2c9a73091172ca7f76f555c6 not found: ID does not exist"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.333240 4886 scope.go:117] "RemoveContainer" containerID="4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec"
Feb 19 22:02:17 crc kubenswrapper[4886]: E0219 22:02:17.333680 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec\": container with ID starting with 4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec not found: ID does not exist" containerID="4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec"
Feb 19 22:02:17 crc kubenswrapper[4886]: I0219 22:02:17.333703 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec"} err="failed to get container status \"4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec\": rpc error: code = NotFound desc = could not find container \"4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec\": container with ID starting with 4a596973b187566f53c6bce471a0a815079e010982e64ac6d3b208261e2bfeec not found: ID does not exist"
Feb 19 22:02:18 crc kubenswrapper[4886]: I0219 22:02:18.622023 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" path="/var/lib/kubelet/pods/ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59/volumes"
Feb 19 22:02:19 crc kubenswrapper[4886]: I0219 22:02:19.601037 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"
Feb 19 22:02:19 crc kubenswrapper[4886]: E0219 22:02:19.601582 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 22:02:21 crc kubenswrapper[4886]: I0219 22:02:21.012391 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dm9n9"
Feb 19 22:02:21 crc kubenswrapper[4886]: I0219 22:02:21.012754 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dm9n9"
Feb 19 22:02:22 crc kubenswrapper[4886]: I0219 22:02:22.076819 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dm9n9" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="registry-server" probeResult="failure" output=<
Feb 19 22:02:22 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s
Feb 19 22:02:22 crc kubenswrapper[4886]: >
Feb 19 22:02:30 crc kubenswrapper[4886]: I0219 22:02:30.617015 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"
Feb 19 22:02:30 crc kubenswrapper[4886]: E0219 22:02:30.617974 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 22:02:31 crc kubenswrapper[4886]: I0219 22:02:31.265830 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dm9n9"
Feb 19 22:02:31 crc kubenswrapper[4886]: I0219 22:02:31.335548 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dm9n9"
Feb 19 22:02:31 crc kubenswrapper[4886]: I0219 22:02:31.515368 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dm9n9"]
Feb 19 22:02:32 crc kubenswrapper[4886]: I0219 22:02:32.394941 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dm9n9" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="registry-server" containerID="cri-o://5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80" gracePeriod=2
Feb 19 22:02:32 crc kubenswrapper[4886]: I0219 22:02:32.988706 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dm9n9"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.063872 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-utilities\") pod \"f8366121-277e-408e-a98d-8db27c422fa0\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") "
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.063934 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-catalog-content\") pod \"f8366121-277e-408e-a98d-8db27c422fa0\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") "
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.064066 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crrpj\" (UniqueName: \"kubernetes.io/projected/f8366121-277e-408e-a98d-8db27c422fa0-kube-api-access-crrpj\") pod \"f8366121-277e-408e-a98d-8db27c422fa0\" (UID: \"f8366121-277e-408e-a98d-8db27c422fa0\") "
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.064860 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-utilities" (OuterVolumeSpecName: "utilities") pod "f8366121-277e-408e-a98d-8db27c422fa0" (UID: "f8366121-277e-408e-a98d-8db27c422fa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.070352 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8366121-277e-408e-a98d-8db27c422fa0-kube-api-access-crrpj" (OuterVolumeSpecName: "kube-api-access-crrpj") pod "f8366121-277e-408e-a98d-8db27c422fa0" (UID: "f8366121-277e-408e-a98d-8db27c422fa0"). InnerVolumeSpecName "kube-api-access-crrpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.124227 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8366121-277e-408e-a98d-8db27c422fa0" (UID: "f8366121-277e-408e-a98d-8db27c422fa0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.166621 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crrpj\" (UniqueName: \"kubernetes.io/projected/f8366121-277e-408e-a98d-8db27c422fa0-kube-api-access-crrpj\") on node \"crc\" DevicePath \"\""
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.166658 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.166667 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8366121-277e-408e-a98d-8db27c422fa0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.409061 4886 generic.go:334] "Generic (PLEG): container finished" podID="f8366121-277e-408e-a98d-8db27c422fa0" containerID="5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80" exitCode=0
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.409138 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerDied","Data":"5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80"}
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.409188 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dm9n9" event={"ID":"f8366121-277e-408e-a98d-8db27c422fa0","Type":"ContainerDied","Data":"6ef14092b1b0938635a483f880d575a773d4d0f2154ef8c567133e35e43edb35"}
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.409188 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dm9n9"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.409223 4886 scope.go:117] "RemoveContainer" containerID="5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.441030 4886 scope.go:117] "RemoveContainer" containerID="d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.478221 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dm9n9"]
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.496284 4886 scope.go:117] "RemoveContainer" containerID="9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.500156 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dm9n9"]
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.572564 4886 scope.go:117] "RemoveContainer" containerID="5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80"
Feb 19 22:02:33 crc kubenswrapper[4886]: E0219 22:02:33.573098 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80\": container with ID starting with 5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80 not found: ID does not exist" containerID="5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.573148 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80"} err="failed to get container status \"5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80\": rpc error: code = NotFound desc = could not find container \"5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80\": container with ID starting with 5eb511a5b4faeb8ae180728647b04ae38ff90cb8ec83dc05b8ef5cbae9a94f80 not found: ID does not exist"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.573182 4886 scope.go:117] "RemoveContainer" containerID="d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de"
Feb 19 22:02:33 crc kubenswrapper[4886]: E0219 22:02:33.574081 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de\": container with ID starting with d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de not found: ID does not exist" containerID="d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.574135 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de"} err="failed to get container status \"d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de\": rpc error: code = NotFound desc = could not find container \"d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de\": container with ID starting with d7db9a41e3235d2d4e0f980f9c3e7da1dd3e0824dd970b615ae1e38fbe8346de not found: ID does not exist"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.574237 4886 scope.go:117] "RemoveContainer" containerID="9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3"
Feb 19 22:02:33 crc kubenswrapper[4886]: E0219 22:02:33.574755 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3\": container with ID starting with 9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3 not found: ID does not exist" containerID="9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3"
Feb 19 22:02:33 crc kubenswrapper[4886]: I0219 22:02:33.574799 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3"} err="failed to get container status \"9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3\": rpc error: code = NotFound desc = could not find container \"9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3\": container with ID starting with 9ff662485ab6d636f9874e1ebe4251e349474fd541a1a9b03f619c7f1e19b6e3 not found: ID does not exist"
Feb 19 22:02:34 crc kubenswrapper[4886]: I0219 22:02:34.625347 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8366121-277e-408e-a98d-8db27c422fa0" path="/var/lib/kubelet/pods/f8366121-277e-408e-a98d-8db27c422fa0/volumes"
Feb 19 22:02:44 crc kubenswrapper[4886]: I0219 22:02:44.602161 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"
Feb 19 22:02:44 crc kubenswrapper[4886]: E0219 22:02:44.602984 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.963800 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m6fk6"]
Feb 19 22:02:47 crc kubenswrapper[4886]: E0219 22:02:47.965318 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="extract-content"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965341 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="extract-content"
Feb 19 22:02:47 crc kubenswrapper[4886]: E0219 22:02:47.965363 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="extract-utilities"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965374 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="extract-utilities"
Feb 19 22:02:47 crc kubenswrapper[4886]: E0219 22:02:47.965396 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="extract-utilities"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965407 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="extract-utilities"
Feb 19 22:02:47 crc kubenswrapper[4886]: E0219 22:02:47.965435 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="registry-server"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965445 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="registry-server"
Feb 19 22:02:47 crc kubenswrapper[4886]: E0219 22:02:47.965480 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="registry-server"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965489 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="registry-server"
Feb 19 22:02:47 crc kubenswrapper[4886]: E0219 22:02:47.965540 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="extract-content"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965549 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="extract-content"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965805 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8366121-277e-408e-a98d-8db27c422fa0" containerName="registry-server"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.965836 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8fcdcb-ca8a-4ae1-b417-3cae161b9a59" containerName="registry-server"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.968571 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:47 crc kubenswrapper[4886]: I0219 22:02:47.986021 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6fk6"]
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.068098 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-utilities\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.068478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qdq\" (UniqueName: \"kubernetes.io/projected/34f84776-603b-4b4c-80e4-b05d91227744-kube-api-access-t8qdq\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.068524 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-catalog-content\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.170675 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-utilities\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.170728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-utilities\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.170845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qdq\" (UniqueName: \"kubernetes.io/projected/34f84776-603b-4b4c-80e4-b05d91227744-kube-api-access-t8qdq\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.170866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-catalog-content\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.171591 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-catalog-content\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.200517 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qdq\" (UniqueName: \"kubernetes.io/projected/34f84776-603b-4b4c-80e4-b05d91227744-kube-api-access-t8qdq\") pod \"redhat-marketplace-m6fk6\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") " pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.301228 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:48 crc kubenswrapper[4886]: I0219 22:02:48.790851 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6fk6"]
Feb 19 22:02:49 crc kubenswrapper[4886]: I0219 22:02:49.649190 4886 generic.go:334] "Generic (PLEG): container finished" podID="34f84776-603b-4b4c-80e4-b05d91227744" containerID="3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3" exitCode=0
Feb 19 22:02:49 crc kubenswrapper[4886]: I0219 22:02:49.649297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerDied","Data":"3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3"}
Feb 19 22:02:49 crc kubenswrapper[4886]: I0219 22:02:49.650023 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerStarted","Data":"ae55b494dbc7d79d9491b2301fcf75dd2643f37fc123216af680cfe7b162be38"}
Feb 19 22:02:50 crc kubenswrapper[4886]: I0219 22:02:50.660443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerStarted","Data":"11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3"}
Feb 19 22:02:51 crc kubenswrapper[4886]: I0219 22:02:51.675139 4886 generic.go:334] "Generic (PLEG): container finished" podID="34f84776-603b-4b4c-80e4-b05d91227744" containerID="11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3" exitCode=0
Feb 19 22:02:51 crc kubenswrapper[4886]: I0219 22:02:51.675180 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerDied","Data":"11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3"}
Feb 19 22:02:52 crc kubenswrapper[4886]: I0219 22:02:52.689671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerStarted","Data":"2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f"}
Feb 19 22:02:52 crc kubenswrapper[4886]: I0219 22:02:52.713707 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m6fk6" podStartSLOduration=3.291562827 podStartE2EDuration="5.713501667s" podCreationTimestamp="2026-02-19 22:02:47 +0000 UTC" firstStartedPulling="2026-02-19 22:02:49.651539974 +0000 UTC m=+3800.279383034" lastFinishedPulling="2026-02-19 22:02:52.073478824 +0000 UTC m=+3802.701321874" observedRunningTime="2026-02-19 22:02:52.708693398 +0000 UTC m=+3803.336536528" watchObservedRunningTime="2026-02-19 22:02:52.713501667 +0000 UTC m=+3803.341344717"
Feb 19 22:02:58 crc kubenswrapper[4886]: I0219 22:02:58.301971 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:58 crc kubenswrapper[4886]: I0219 22:02:58.302547 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:58 crc kubenswrapper[4886]: I0219 22:02:58.357176 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:58 crc kubenswrapper[4886]: I0219 22:02:58.855019 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:02:58 crc kubenswrapper[4886]: I0219 22:02:58.924333 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6fk6"]
Feb 19 22:02:59 crc kubenswrapper[4886]: I0219 22:02:59.602003 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"
Feb 19 22:02:59 crc kubenswrapper[4886]: E0219 22:02:59.602692 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 22:03:00 crc kubenswrapper[4886]: I0219 22:03:00.779385 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m6fk6" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="registry-server" containerID="cri-o://2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f" gracePeriod=2
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.332888 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.470975 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-utilities\") pod \"34f84776-603b-4b4c-80e4-b05d91227744\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") "
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.471195 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8qdq\" (UniqueName: \"kubernetes.io/projected/34f84776-603b-4b4c-80e4-b05d91227744-kube-api-access-t8qdq\") pod \"34f84776-603b-4b4c-80e4-b05d91227744\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") "
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.471238 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-catalog-content\") pod \"34f84776-603b-4b4c-80e4-b05d91227744\" (UID: \"34f84776-603b-4b4c-80e4-b05d91227744\") "
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.472645 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-utilities" (OuterVolumeSpecName: "utilities") pod "34f84776-603b-4b4c-80e4-b05d91227744" (UID: "34f84776-603b-4b4c-80e4-b05d91227744"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.473626 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.479260 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f84776-603b-4b4c-80e4-b05d91227744-kube-api-access-t8qdq" (OuterVolumeSpecName: "kube-api-access-t8qdq") pod "34f84776-603b-4b4c-80e4-b05d91227744" (UID: "34f84776-603b-4b4c-80e4-b05d91227744"). InnerVolumeSpecName "kube-api-access-t8qdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.506911 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34f84776-603b-4b4c-80e4-b05d91227744" (UID: "34f84776-603b-4b4c-80e4-b05d91227744"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.576337 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8qdq\" (UniqueName: \"kubernetes.io/projected/34f84776-603b-4b4c-80e4-b05d91227744-kube-api-access-t8qdq\") on node \"crc\" DevicePath \"\""
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.576389 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f84776-603b-4b4c-80e4-b05d91227744-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.791560 4886 generic.go:334] "Generic (PLEG): container finished" podID="34f84776-603b-4b4c-80e4-b05d91227744" containerID="2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f" exitCode=0
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.791607 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6fk6"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.791652 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerDied","Data":"2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f"}
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.791755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6fk6" event={"ID":"34f84776-603b-4b4c-80e4-b05d91227744","Type":"ContainerDied","Data":"ae55b494dbc7d79d9491b2301fcf75dd2643f37fc123216af680cfe7b162be38"}
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.791797 4886 scope.go:117] "RemoveContainer" containerID="2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.828078 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6fk6"]
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.829778 4886 scope.go:117] "RemoveContainer" containerID="11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.841170 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6fk6"]
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.854361 4886 scope.go:117] "RemoveContainer" containerID="3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.916455 4886 scope.go:117] "RemoveContainer" containerID="2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f"
Feb 19 22:03:01 crc kubenswrapper[4886]: E0219 22:03:01.917210 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f\": container with ID starting with 2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f not found: ID does not exist" containerID="2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.917291 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f"} err="failed to get container status \"2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f\": rpc error: code = NotFound desc = could not find container \"2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f\": container with ID starting with 2e66055389f30e44320d25057fc1dce83eb232aa44fcee362ec98bbf2fc56f9f not found: ID does not exist"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.917328 4886 scope.go:117] "RemoveContainer" containerID="11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3"
Feb 19 22:03:01 crc kubenswrapper[4886]: E0219 22:03:01.918048 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3\": container with ID starting with 11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3 not found: ID does not exist" containerID="11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.918121 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3"} err="failed to get container status \"11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3\": rpc error: code = NotFound desc = could not find container \"11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3\": container with ID starting with 11de5b1c5173392001b2ee34ad42f2c5c203c01cd508cd26086c448e4dbb72f3 not found: ID does not exist"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.918173 4886 scope.go:117] "RemoveContainer" containerID="3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3"
Feb 19 22:03:01 crc kubenswrapper[4886]: E0219 22:03:01.918618 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3\": container with ID starting with 3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3 not found: ID does not exist" containerID="3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3"
Feb 19 22:03:01 crc kubenswrapper[4886]: I0219 22:03:01.918686 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3"} err="failed to get container status \"3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3\": rpc error: code = NotFound desc = could not find container \"3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3\": container with ID starting with 3c53bbeb0a628328a1fe7313de82d76a809b9753f5a762196b930d0935e9cfb3 not found: ID does not exist"
Feb 19 22:03:02 crc kubenswrapper[4886]: I0219 22:03:02.615848 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f84776-603b-4b4c-80e4-b05d91227744" path="/var/lib/kubelet/pods/34f84776-603b-4b4c-80e4-b05d91227744/volumes"
Feb 19 22:03:10 crc kubenswrapper[4886]: I0219 22:03:10.616732 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"
Feb 19 22:03:10 crc kubenswrapper[4886]: E0219 22:03:10.617610 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007"
Feb 19 22:03:24 crc kubenswrapper[4886]: I0219 22:03:24.601133 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b"
Feb 19 22:03:24 crc kubenswrapper[4886]: E0219 22:03:24.602155 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\""
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:03:35 crc kubenswrapper[4886]: I0219 22:03:35.602414 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:03:35 crc kubenswrapper[4886]: E0219 22:03:35.603431 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:03:48 crc kubenswrapper[4886]: I0219 22:03:48.601256 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:03:48 crc kubenswrapper[4886]: E0219 22:03:48.601959 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:04:00 crc kubenswrapper[4886]: I0219 22:04:00.611197 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:04:00 crc kubenswrapper[4886]: E0219 22:04:00.612549 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:04:13 crc kubenswrapper[4886]: I0219 22:04:13.602020 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:04:13 crc kubenswrapper[4886]: E0219 22:04:13.603026 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:04:26 crc kubenswrapper[4886]: I0219 22:04:26.603068 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:04:27 crc kubenswrapper[4886]: I0219 22:04:27.787947 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"1ad58abb9ab3a8381c8e7916552425cf48f3e9779b23de2d17912e03076a33bc"} Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.143556 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-24xg9"] Feb 19 22:05:05 crc kubenswrapper[4886]: E0219 22:05:05.144642 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="extract-utilities" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.144657 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="extract-utilities" Feb 19 22:05:05 crc 
kubenswrapper[4886]: E0219 22:05:05.144673 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="extract-content" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.144682 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="extract-content" Feb 19 22:05:05 crc kubenswrapper[4886]: E0219 22:05:05.144730 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="registry-server" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.144739 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="registry-server" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.144980 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f84776-603b-4b4c-80e4-b05d91227744" containerName="registry-server" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.147666 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.169844 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24xg9"] Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.207153 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-catalog-content\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.207381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-utilities\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.207479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnrjr\" (UniqueName: \"kubernetes.io/projected/80ba93bc-093c-4f50-881f-3c0525669533-kube-api-access-xnrjr\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.309629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-catalog-content\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.309700 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-utilities\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.309741 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnrjr\" (UniqueName: \"kubernetes.io/projected/80ba93bc-093c-4f50-881f-3c0525669533-kube-api-access-xnrjr\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.310254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-catalog-content\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.310284 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-utilities\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.333675 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnrjr\" (UniqueName: \"kubernetes.io/projected/80ba93bc-093c-4f50-881f-3c0525669533-kube-api-access-xnrjr\") pod \"redhat-operators-24xg9\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:05 crc kubenswrapper[4886]: I0219 22:05:05.475038 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:06 crc kubenswrapper[4886]: I0219 22:05:06.013159 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24xg9"] Feb 19 22:05:06 crc kubenswrapper[4886]: I0219 22:05:06.238997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerStarted","Data":"caf17de5bb08c6622e3db02cc7740c3e1ebfbba52c5eddb03d2a6d6f1a3e1f3f"} Feb 19 22:05:07 crc kubenswrapper[4886]: I0219 22:05:07.251805 4886 generic.go:334] "Generic (PLEG): container finished" podID="80ba93bc-093c-4f50-881f-3c0525669533" containerID="c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54" exitCode=0 Feb 19 22:05:07 crc kubenswrapper[4886]: I0219 22:05:07.251984 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerDied","Data":"c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54"} Feb 19 22:05:09 crc kubenswrapper[4886]: I0219 22:05:09.296782 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerStarted","Data":"4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0"} Feb 19 22:05:14 crc kubenswrapper[4886]: I0219 22:05:14.356666 4886 generic.go:334] "Generic (PLEG): container finished" podID="80ba93bc-093c-4f50-881f-3c0525669533" containerID="4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0" exitCode=0 Feb 19 22:05:14 crc kubenswrapper[4886]: I0219 22:05:14.356781 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" 
event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerDied","Data":"4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0"} Feb 19 22:05:15 crc kubenswrapper[4886]: I0219 22:05:15.368750 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerStarted","Data":"09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b"} Feb 19 22:05:15 crc kubenswrapper[4886]: I0219 22:05:15.399398 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-24xg9" podStartSLOduration=2.897579542 podStartE2EDuration="10.39930412s" podCreationTimestamp="2026-02-19 22:05:05 +0000 UTC" firstStartedPulling="2026-02-19 22:05:07.254780107 +0000 UTC m=+3937.882623157" lastFinishedPulling="2026-02-19 22:05:14.756504675 +0000 UTC m=+3945.384347735" observedRunningTime="2026-02-19 22:05:15.388808871 +0000 UTC m=+3946.016651921" watchObservedRunningTime="2026-02-19 22:05:15.39930412 +0000 UTC m=+3946.027147170" Feb 19 22:05:15 crc kubenswrapper[4886]: I0219 22:05:15.476145 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:15 crc kubenswrapper[4886]: I0219 22:05:15.476433 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:16 crc kubenswrapper[4886]: I0219 22:05:16.526889 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-24xg9" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="registry-server" probeResult="failure" output=< Feb 19 22:05:16 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:05:16 crc kubenswrapper[4886]: > Feb 19 22:05:26 crc kubenswrapper[4886]: I0219 22:05:26.536259 4886 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-24xg9" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="registry-server" probeResult="failure" output=< Feb 19 22:05:26 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:05:26 crc kubenswrapper[4886]: > Feb 19 22:05:35 crc kubenswrapper[4886]: I0219 22:05:35.525313 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:35 crc kubenswrapper[4886]: I0219 22:05:35.583944 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:36 crc kubenswrapper[4886]: I0219 22:05:36.343891 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24xg9"] Feb 19 22:05:36 crc kubenswrapper[4886]: I0219 22:05:36.608440 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-24xg9" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="registry-server" containerID="cri-o://09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b" gracePeriod=2 Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.236167 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.316720 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-catalog-content\") pod \"80ba93bc-093c-4f50-881f-3c0525669533\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.316863 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-utilities\") pod \"80ba93bc-093c-4f50-881f-3c0525669533\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.317077 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnrjr\" (UniqueName: \"kubernetes.io/projected/80ba93bc-093c-4f50-881f-3c0525669533-kube-api-access-xnrjr\") pod \"80ba93bc-093c-4f50-881f-3c0525669533\" (UID: \"80ba93bc-093c-4f50-881f-3c0525669533\") " Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.317656 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-utilities" (OuterVolumeSpecName: "utilities") pod "80ba93bc-093c-4f50-881f-3c0525669533" (UID: "80ba93bc-093c-4f50-881f-3c0525669533"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.318221 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.324915 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80ba93bc-093c-4f50-881f-3c0525669533-kube-api-access-xnrjr" (OuterVolumeSpecName: "kube-api-access-xnrjr") pod "80ba93bc-093c-4f50-881f-3c0525669533" (UID: "80ba93bc-093c-4f50-881f-3c0525669533"). InnerVolumeSpecName "kube-api-access-xnrjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.426930 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnrjr\" (UniqueName: \"kubernetes.io/projected/80ba93bc-093c-4f50-881f-3c0525669533-kube-api-access-xnrjr\") on node \"crc\" DevicePath \"\"" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.446318 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80ba93bc-093c-4f50-881f-3c0525669533" (UID: "80ba93bc-093c-4f50-881f-3c0525669533"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.529700 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ba93bc-093c-4f50-881f-3c0525669533-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.623236 4886 generic.go:334] "Generic (PLEG): container finished" podID="80ba93bc-093c-4f50-881f-3c0525669533" containerID="09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b" exitCode=0 Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.623277 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24xg9" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.623304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerDied","Data":"09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b"} Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.624489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24xg9" event={"ID":"80ba93bc-093c-4f50-881f-3c0525669533","Type":"ContainerDied","Data":"caf17de5bb08c6622e3db02cc7740c3e1ebfbba52c5eddb03d2a6d6f1a3e1f3f"} Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.624578 4886 scope.go:117] "RemoveContainer" containerID="09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.656873 4886 scope.go:117] "RemoveContainer" containerID="4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.679626 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24xg9"] Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 
22:05:37.692834 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-24xg9"] Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.706185 4886 scope.go:117] "RemoveContainer" containerID="c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.753601 4886 scope.go:117] "RemoveContainer" containerID="09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b" Feb 19 22:05:37 crc kubenswrapper[4886]: E0219 22:05:37.754211 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b\": container with ID starting with 09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b not found: ID does not exist" containerID="09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.754330 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b"} err="failed to get container status \"09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b\": rpc error: code = NotFound desc = could not find container \"09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b\": container with ID starting with 09a97a01b11d2db025fe995dcd25e84aa9594df50b48242dfdcc686ca8fcbc7b not found: ID does not exist" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.754370 4886 scope.go:117] "RemoveContainer" containerID="4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0" Feb 19 22:05:37 crc kubenswrapper[4886]: E0219 22:05:37.754955 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0\": container with ID 
starting with 4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0 not found: ID does not exist" containerID="4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.755015 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0"} err="failed to get container status \"4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0\": rpc error: code = NotFound desc = could not find container \"4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0\": container with ID starting with 4b6c0786e87aef353ed91621ac2a469dcdc6c243ea930dcaa6f1a370e96499c0 not found: ID does not exist" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.755041 4886 scope.go:117] "RemoveContainer" containerID="c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54" Feb 19 22:05:37 crc kubenswrapper[4886]: E0219 22:05:37.755436 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54\": container with ID starting with c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54 not found: ID does not exist" containerID="c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54" Feb 19 22:05:37 crc kubenswrapper[4886]: I0219 22:05:37.755481 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54"} err="failed to get container status \"c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54\": rpc error: code = NotFound desc = could not find container \"c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54\": container with ID starting with c9e9833cf16fef9258337993165e68c3d4d7fec384d82747d1f4abbeb60dfe54 not found: 
ID does not exist" Feb 19 22:05:38 crc kubenswrapper[4886]: I0219 22:05:38.613349 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ba93bc-093c-4f50-881f-3c0525669533" path="/var/lib/kubelet/pods/80ba93bc-093c-4f50-881f-3c0525669533/volumes" Feb 19 22:06:48 crc kubenswrapper[4886]: I0219 22:06:48.324771 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:06:48 crc kubenswrapper[4886]: I0219 22:06:48.325260 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:07:18 crc kubenswrapper[4886]: I0219 22:07:18.325320 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:07:18 crc kubenswrapper[4886]: I0219 22:07:18.326175 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:07:48 crc kubenswrapper[4886]: I0219 22:07:48.324745 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:07:48 crc kubenswrapper[4886]: I0219 22:07:48.325383 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:07:48 crc kubenswrapper[4886]: I0219 22:07:48.325434 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 22:07:48 crc kubenswrapper[4886]: I0219 22:07:48.325979 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ad58abb9ab3a8381c8e7916552425cf48f3e9779b23de2d17912e03076a33bc"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 22:07:48 crc kubenswrapper[4886]: I0219 22:07:48.326030 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://1ad58abb9ab3a8381c8e7916552425cf48f3e9779b23de2d17912e03076a33bc" gracePeriod=600 Feb 19 22:07:49 crc kubenswrapper[4886]: I0219 22:07:49.269683 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="1ad58abb9ab3a8381c8e7916552425cf48f3e9779b23de2d17912e03076a33bc" exitCode=0 Feb 19 22:07:49 crc kubenswrapper[4886]: I0219 22:07:49.269754 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" 
event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"1ad58abb9ab3a8381c8e7916552425cf48f3e9779b23de2d17912e03076a33bc"} Feb 19 22:07:49 crc kubenswrapper[4886]: I0219 22:07:49.270027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5"} Feb 19 22:07:49 crc kubenswrapper[4886]: I0219 22:07:49.270045 4886 scope.go:117] "RemoveContainer" containerID="71276d61fc4e82fcad391827853d0e3cdd4a717667395fb444e3f77effceb48b" Feb 19 22:09:48 crc kubenswrapper[4886]: I0219 22:09:48.324663 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:09:48 crc kubenswrapper[4886]: I0219 22:09:48.325162 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:10:03 crc kubenswrapper[4886]: E0219 22:10:03.559621 4886 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:34466->38.102.83.30:40589: write tcp 38.102.83.30:34466->38.102.83.30:40589: write: broken pipe Feb 19 22:10:18 crc kubenswrapper[4886]: I0219 22:10:18.324794 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 22:10:18 crc kubenswrapper[4886]: I0219 22:10:18.325473 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:10:48 crc kubenswrapper[4886]: I0219 22:10:48.324724 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:10:48 crc kubenswrapper[4886]: I0219 22:10:48.325980 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:10:48 crc kubenswrapper[4886]: I0219 22:10:48.326074 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 22:10:48 crc kubenswrapper[4886]: I0219 22:10:48.327329 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 22:10:48 crc kubenswrapper[4886]: I0219 22:10:48.327410 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" gracePeriod=600 Feb 19 22:10:48 crc kubenswrapper[4886]: E0219 22:10:48.449547 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:10:49 crc kubenswrapper[4886]: I0219 22:10:49.394895 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" exitCode=0 Feb 19 22:10:49 crc kubenswrapper[4886]: I0219 22:10:49.394967 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5"} Feb 19 22:10:49 crc kubenswrapper[4886]: I0219 22:10:49.395684 4886 scope.go:117] "RemoveContainer" containerID="1ad58abb9ab3a8381c8e7916552425cf48f3e9779b23de2d17912e03076a33bc" Feb 19 22:10:49 crc kubenswrapper[4886]: I0219 22:10:49.396480 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:10:49 crc kubenswrapper[4886]: E0219 22:10:49.396893 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:11:04 crc kubenswrapper[4886]: I0219 22:11:04.601948 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:11:04 crc kubenswrapper[4886]: E0219 22:11:04.602868 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:11:17 crc kubenswrapper[4886]: I0219 22:11:17.601989 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:11:17 crc kubenswrapper[4886]: E0219 22:11:17.603744 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:11:30 crc kubenswrapper[4886]: I0219 22:11:30.601041 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:11:30 crc kubenswrapper[4886]: E0219 22:11:30.601825 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:11:41 crc kubenswrapper[4886]: I0219 22:11:41.602846 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:11:41 crc kubenswrapper[4886]: E0219 22:11:41.605492 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:11:55 crc kubenswrapper[4886]: I0219 22:11:55.601723 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:11:55 crc kubenswrapper[4886]: E0219 22:11:55.602513 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:12:10 crc kubenswrapper[4886]: I0219 22:12:10.612543 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:12:10 crc kubenswrapper[4886]: E0219 22:12:10.613387 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:12:21 crc kubenswrapper[4886]: I0219 22:12:21.602128 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:12:21 crc kubenswrapper[4886]: E0219 22:12:21.603103 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.631234 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ctdkq"] Feb 19 22:12:33 crc kubenswrapper[4886]: E0219 22:12:33.632885 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="registry-server" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.632906 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="registry-server" Feb 19 22:12:33 crc kubenswrapper[4886]: E0219 22:12:33.632950 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="extract-content" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.632958 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="extract-content" Feb 19 22:12:33 crc kubenswrapper[4886]: E0219 22:12:33.633020 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="extract-utilities" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.633029 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="extract-utilities" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.633362 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ba93bc-093c-4f50-881f-3c0525669533" containerName="registry-server" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.635550 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.648972 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctdkq"] Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.835131 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-utilities\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.836212 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5gh\" (UniqueName: \"kubernetes.io/projected/d6b3f314-6188-4c12-b687-82f5aea9bfd6-kube-api-access-7d5gh\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.836292 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-catalog-content\") pod 
\"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.939390 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5gh\" (UniqueName: \"kubernetes.io/projected/d6b3f314-6188-4c12-b687-82f5aea9bfd6-kube-api-access-7d5gh\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.939439 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-catalog-content\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.939623 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-utilities\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.940318 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-catalog-content\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.940390 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-utilities\") pod \"community-operators-ctdkq\" (UID: 
\"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:33 crc kubenswrapper[4886]: I0219 22:12:33.967886 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5gh\" (UniqueName: \"kubernetes.io/projected/d6b3f314-6188-4c12-b687-82f5aea9bfd6-kube-api-access-7d5gh\") pod \"community-operators-ctdkq\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:34 crc kubenswrapper[4886]: I0219 22:12:34.267880 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:34 crc kubenswrapper[4886]: I0219 22:12:34.603828 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:12:34 crc kubenswrapper[4886]: E0219 22:12:34.604481 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:12:34 crc kubenswrapper[4886]: I0219 22:12:34.852959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctdkq"] Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.429375 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5rdmz"] Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.432698 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.442740 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5rdmz"] Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.582651 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvh7w\" (UniqueName: \"kubernetes.io/projected/787d1be6-bb17-4368-a423-42aca50eb005-kube-api-access-pvh7w\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.582710 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-utilities\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.582777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-catalog-content\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.685853 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-catalog-content\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.686072 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pvh7w\" (UniqueName: \"kubernetes.io/projected/787d1be6-bb17-4368-a423-42aca50eb005-kube-api-access-pvh7w\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.686100 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-utilities\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.686586 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-catalog-content\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.686599 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-utilities\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.766227 4886 generic.go:334] "Generic (PLEG): container finished" podID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerID="a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb" exitCode=0 Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.766296 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctdkq" 
event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerDied","Data":"a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb"} Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.766345 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctdkq" event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerStarted","Data":"b9b1c379d5f218c6915276cc1a60a32d4b4175aae837357ec7bc6c06d661d0b4"} Feb 19 22:12:35 crc kubenswrapper[4886]: I0219 22:12:35.768485 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 22:12:36 crc kubenswrapper[4886]: I0219 22:12:36.094799 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvh7w\" (UniqueName: \"kubernetes.io/projected/787d1be6-bb17-4368-a423-42aca50eb005-kube-api-access-pvh7w\") pod \"certified-operators-5rdmz\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:36 crc kubenswrapper[4886]: I0219 22:12:36.355992 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:36 crc kubenswrapper[4886]: W0219 22:12:36.872598 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787d1be6_bb17_4368_a423_42aca50eb005.slice/crio-6b072d2771d39f03485cf206df5e0ab05ab76b17aa78abef449aa901c5679b30 WatchSource:0}: Error finding container 6b072d2771d39f03485cf206df5e0ab05ab76b17aa78abef449aa901c5679b30: Status 404 returned error can't find the container with id 6b072d2771d39f03485cf206df5e0ab05ab76b17aa78abef449aa901c5679b30 Feb 19 22:12:36 crc kubenswrapper[4886]: I0219 22:12:36.875201 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5rdmz"] Feb 19 22:12:37 crc kubenswrapper[4886]: I0219 22:12:37.793709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctdkq" event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerStarted","Data":"b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e"} Feb 19 22:12:37 crc kubenswrapper[4886]: I0219 22:12:37.796062 4886 generic.go:334] "Generic (PLEG): container finished" podID="787d1be6-bb17-4368-a423-42aca50eb005" containerID="5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659" exitCode=0 Feb 19 22:12:37 crc kubenswrapper[4886]: I0219 22:12:37.796114 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerDied","Data":"5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659"} Feb 19 22:12:37 crc kubenswrapper[4886]: I0219 22:12:37.796141 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" 
event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerStarted","Data":"6b072d2771d39f03485cf206df5e0ab05ab76b17aa78abef449aa901c5679b30"} Feb 19 22:12:38 crc kubenswrapper[4886]: I0219 22:12:38.810925 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerStarted","Data":"61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8"} Feb 19 22:12:38 crc kubenswrapper[4886]: I0219 22:12:38.814796 4886 generic.go:334] "Generic (PLEG): container finished" podID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerID="b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e" exitCode=0 Feb 19 22:12:38 crc kubenswrapper[4886]: I0219 22:12:38.814879 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctdkq" event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerDied","Data":"b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e"} Feb 19 22:12:39 crc kubenswrapper[4886]: I0219 22:12:39.830154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctdkq" event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerStarted","Data":"d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67"} Feb 19 22:12:39 crc kubenswrapper[4886]: I0219 22:12:39.859310 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ctdkq" podStartSLOduration=3.417570268 podStartE2EDuration="6.859290465s" podCreationTimestamp="2026-02-19 22:12:33 +0000 UTC" firstStartedPulling="2026-02-19 22:12:35.76823689 +0000 UTC m=+4386.396079940" lastFinishedPulling="2026-02-19 22:12:39.209957087 +0000 UTC m=+4389.837800137" observedRunningTime="2026-02-19 22:12:39.846727044 +0000 UTC m=+4390.474570104" watchObservedRunningTime="2026-02-19 22:12:39.859290465 +0000 UTC 
m=+4390.487133515" Feb 19 22:12:40 crc kubenswrapper[4886]: I0219 22:12:40.851493 4886 generic.go:334] "Generic (PLEG): container finished" podID="787d1be6-bb17-4368-a423-42aca50eb005" containerID="61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8" exitCode=0 Feb 19 22:12:40 crc kubenswrapper[4886]: I0219 22:12:40.851596 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerDied","Data":"61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8"} Feb 19 22:12:41 crc kubenswrapper[4886]: I0219 22:12:41.868214 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerStarted","Data":"2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5"} Feb 19 22:12:41 crc kubenswrapper[4886]: I0219 22:12:41.895703 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5rdmz" podStartSLOduration=3.297704372 podStartE2EDuration="6.895683787s" podCreationTimestamp="2026-02-19 22:12:35 +0000 UTC" firstStartedPulling="2026-02-19 22:12:37.798601052 +0000 UTC m=+4388.426444092" lastFinishedPulling="2026-02-19 22:12:41.396580447 +0000 UTC m=+4392.024423507" observedRunningTime="2026-02-19 22:12:41.885254429 +0000 UTC m=+4392.513097479" watchObservedRunningTime="2026-02-19 22:12:41.895683787 +0000 UTC m=+4392.523526827" Feb 19 22:12:44 crc kubenswrapper[4886]: I0219 22:12:44.268881 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:44 crc kubenswrapper[4886]: I0219 22:12:44.269390 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:45 crc kubenswrapper[4886]: I0219 22:12:45.333179 
4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ctdkq" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="registry-server" probeResult="failure" output=< Feb 19 22:12:45 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:12:45 crc kubenswrapper[4886]: > Feb 19 22:12:46 crc kubenswrapper[4886]: I0219 22:12:46.356930 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:46 crc kubenswrapper[4886]: I0219 22:12:46.357075 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:46 crc kubenswrapper[4886]: I0219 22:12:46.752079 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:46 crc kubenswrapper[4886]: I0219 22:12:46.965290 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:47 crc kubenswrapper[4886]: I0219 22:12:47.016278 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5rdmz"] Feb 19 22:12:48 crc kubenswrapper[4886]: I0219 22:12:48.936793 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5rdmz" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="registry-server" containerID="cri-o://2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5" gracePeriod=2 Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.524208 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.601852 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:12:49 crc kubenswrapper[4886]: E0219 22:12:49.602169 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.653455 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-catalog-content\") pod \"787d1be6-bb17-4368-a423-42aca50eb005\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.653593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvh7w\" (UniqueName: \"kubernetes.io/projected/787d1be6-bb17-4368-a423-42aca50eb005-kube-api-access-pvh7w\") pod \"787d1be6-bb17-4368-a423-42aca50eb005\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.653814 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-utilities\") pod \"787d1be6-bb17-4368-a423-42aca50eb005\" (UID: \"787d1be6-bb17-4368-a423-42aca50eb005\") " Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.657208 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-utilities" (OuterVolumeSpecName: "utilities") pod "787d1be6-bb17-4368-a423-42aca50eb005" (UID: "787d1be6-bb17-4368-a423-42aca50eb005"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.662679 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787d1be6-bb17-4368-a423-42aca50eb005-kube-api-access-pvh7w" (OuterVolumeSpecName: "kube-api-access-pvh7w") pod "787d1be6-bb17-4368-a423-42aca50eb005" (UID: "787d1be6-bb17-4368-a423-42aca50eb005"). InnerVolumeSpecName "kube-api-access-pvh7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.754425 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "787d1be6-bb17-4368-a423-42aca50eb005" (UID: "787d1be6-bb17-4368-a423-42aca50eb005"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.757182 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.757225 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvh7w\" (UniqueName: \"kubernetes.io/projected/787d1be6-bb17-4368-a423-42aca50eb005-kube-api-access-pvh7w\") on node \"crc\" DevicePath \"\"" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.757240 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787d1be6-bb17-4368-a423-42aca50eb005-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.951128 4886 generic.go:334] "Generic (PLEG): container finished" podID="787d1be6-bb17-4368-a423-42aca50eb005" containerID="2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5" exitCode=0 Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.951176 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerDied","Data":"2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5"} Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.951206 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rdmz" event={"ID":"787d1be6-bb17-4368-a423-42aca50eb005","Type":"ContainerDied","Data":"6b072d2771d39f03485cf206df5e0ab05ab76b17aa78abef449aa901c5679b30"} Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.951229 4886 scope.go:117] "RemoveContainer" containerID="2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 
22:12:49.951239 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5rdmz" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.986210 4886 scope.go:117] "RemoveContainer" containerID="61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8" Feb 19 22:12:49 crc kubenswrapper[4886]: I0219 22:12:49.997449 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5rdmz"] Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.012560 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5rdmz"] Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.020337 4886 scope.go:117] "RemoveContainer" containerID="5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659" Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.089974 4886 scope.go:117] "RemoveContainer" containerID="2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5" Feb 19 22:12:50 crc kubenswrapper[4886]: E0219 22:12:50.091063 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5\": container with ID starting with 2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5 not found: ID does not exist" containerID="2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5" Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.091116 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5"} err="failed to get container status \"2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5\": rpc error: code = NotFound desc = could not find container \"2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5\": container with ID starting with 
2f7d9f7d6d7da147a5d9bac889d3e6e9d28339a9871ae5090e978bbb69f4c1b5 not found: ID does not exist" Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.091151 4886 scope.go:117] "RemoveContainer" containerID="61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8" Feb 19 22:12:50 crc kubenswrapper[4886]: E0219 22:12:50.091651 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8\": container with ID starting with 61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8 not found: ID does not exist" containerID="61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8" Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.091708 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8"} err="failed to get container status \"61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8\": rpc error: code = NotFound desc = could not find container \"61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8\": container with ID starting with 61630d878912af0be39f1efa248ad7657d5c9802788d96f61b5e1f94dedce2e8 not found: ID does not exist" Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.091745 4886 scope.go:117] "RemoveContainer" containerID="5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659" Feb 19 22:12:50 crc kubenswrapper[4886]: E0219 22:12:50.092218 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659\": container with ID starting with 5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659 not found: ID does not exist" containerID="5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659" Feb 19 22:12:50 crc 
kubenswrapper[4886]: I0219 22:12:50.092252 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659"} err="failed to get container status \"5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659\": rpc error: code = NotFound desc = could not find container \"5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659\": container with ID starting with 5291fb0fcf5469f16ea053d56d31383ae924b355bfe5050617894aa82d960659 not found: ID does not exist" Feb 19 22:12:50 crc kubenswrapper[4886]: I0219 22:12:50.622255 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787d1be6-bb17-4368-a423-42aca50eb005" path="/var/lib/kubelet/pods/787d1be6-bb17-4368-a423-42aca50eb005/volumes" Feb 19 22:12:54 crc kubenswrapper[4886]: I0219 22:12:54.337634 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:54 crc kubenswrapper[4886]: I0219 22:12:54.403644 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:54 crc kubenswrapper[4886]: I0219 22:12:54.583839 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctdkq"] Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.032675 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ctdkq" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="registry-server" containerID="cri-o://d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67" gracePeriod=2 Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.574368 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.734332 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-catalog-content\") pod \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.734641 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d5gh\" (UniqueName: \"kubernetes.io/projected/d6b3f314-6188-4c12-b687-82f5aea9bfd6-kube-api-access-7d5gh\") pod \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.734678 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-utilities\") pod \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\" (UID: \"d6b3f314-6188-4c12-b687-82f5aea9bfd6\") " Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.741667 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b3f314-6188-4c12-b687-82f5aea9bfd6-kube-api-access-7d5gh" (OuterVolumeSpecName: "kube-api-access-7d5gh") pod "d6b3f314-6188-4c12-b687-82f5aea9bfd6" (UID: "d6b3f314-6188-4c12-b687-82f5aea9bfd6"). InnerVolumeSpecName "kube-api-access-7d5gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.746593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-utilities" (OuterVolumeSpecName: "utilities") pod "d6b3f314-6188-4c12-b687-82f5aea9bfd6" (UID: "d6b3f314-6188-4c12-b687-82f5aea9bfd6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.837787 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d5gh\" (UniqueName: \"kubernetes.io/projected/d6b3f314-6188-4c12-b687-82f5aea9bfd6-kube-api-access-7d5gh\") on node \"crc\" DevicePath \"\"" Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.837826 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 22:12:56 crc kubenswrapper[4886]: I0219 22:12:56.953310 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6b3f314-6188-4c12-b687-82f5aea9bfd6" (UID: "d6b3f314-6188-4c12-b687-82f5aea9bfd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.042877 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b3f314-6188-4c12-b687-82f5aea9bfd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.050591 4886 generic.go:334] "Generic (PLEG): container finished" podID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerID="d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67" exitCode=0 Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.050635 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctdkq" event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerDied","Data":"d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67"} Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.050661 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ctdkq" event={"ID":"d6b3f314-6188-4c12-b687-82f5aea9bfd6","Type":"ContainerDied","Data":"b9b1c379d5f218c6915276cc1a60a32d4b4175aae837357ec7bc6c06d661d0b4"} Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.050677 4886 scope.go:117] "RemoveContainer" containerID="d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.050864 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctdkq" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.084513 4886 scope.go:117] "RemoveContainer" containerID="b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.102540 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctdkq"] Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.119198 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ctdkq"] Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.132797 4886 scope.go:117] "RemoveContainer" containerID="a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.187481 4886 scope.go:117] "RemoveContainer" containerID="d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67" Feb 19 22:12:57 crc kubenswrapper[4886]: E0219 22:12:57.188549 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67\": container with ID starting with d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67 not found: ID does not exist" containerID="d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 
22:12:57.188580 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67"} err="failed to get container status \"d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67\": rpc error: code = NotFound desc = could not find container \"d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67\": container with ID starting with d234e6c7e3751017a212de9ec67000133516ca09a64509e9c889902a064a4b67 not found: ID does not exist" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.188602 4886 scope.go:117] "RemoveContainer" containerID="b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e" Feb 19 22:12:57 crc kubenswrapper[4886]: E0219 22:12:57.189022 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e\": container with ID starting with b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e not found: ID does not exist" containerID="b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.189043 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e"} err="failed to get container status \"b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e\": rpc error: code = NotFound desc = could not find container \"b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e\": container with ID starting with b2c35f3448e05f9bd0309a4bb350775a70880f33e60f55ed6dcd5be4ce9fe84e not found: ID does not exist" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.189056 4886 scope.go:117] "RemoveContainer" containerID="a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb" Feb 19 22:12:57 crc 
kubenswrapper[4886]: E0219 22:12:57.189449 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb\": container with ID starting with a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb not found: ID does not exist" containerID="a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb" Feb 19 22:12:57 crc kubenswrapper[4886]: I0219 22:12:57.189473 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb"} err="failed to get container status \"a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb\": rpc error: code = NotFound desc = could not find container \"a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb\": container with ID starting with a3b364e9fdf8aaca7b5748ee55621a80c823eb267d9d009a24c1815c64117ebb not found: ID does not exist" Feb 19 22:12:58 crc kubenswrapper[4886]: I0219 22:12:58.619570 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" path="/var/lib/kubelet/pods/d6b3f314-6188-4c12-b687-82f5aea9bfd6/volumes" Feb 19 22:13:00 crc kubenswrapper[4886]: I0219 22:13:00.011236 4886 trace.go:236] Trace[521215384]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (19-Feb-2026 22:12:58.819) (total time: 1190ms): Feb 19 22:13:00 crc kubenswrapper[4886]: Trace[521215384]: [1.190690345s] [1.190690345s] END Feb 19 22:13:04 crc kubenswrapper[4886]: I0219 22:13:04.602795 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:13:04 crc kubenswrapper[4886]: E0219 22:13:04.604223 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.114275 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 22:13:18 crc kubenswrapper[4886]: E0219 22:13:18.115313 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="registry-server" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115331 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="registry-server" Feb 19 22:13:18 crc kubenswrapper[4886]: E0219 22:13:18.115356 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="extract-utilities" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115364 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="extract-utilities" Feb 19 22:13:18 crc kubenswrapper[4886]: E0219 22:13:18.115380 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="extract-utilities" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115389 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="extract-utilities" Feb 19 22:13:18 crc kubenswrapper[4886]: E0219 22:13:18.115420 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="extract-content" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115428 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="extract-content" Feb 19 22:13:18 crc kubenswrapper[4886]: E0219 22:13:18.115444 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="registry-server" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115450 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="registry-server" Feb 19 22:13:18 crc kubenswrapper[4886]: E0219 22:13:18.115473 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="extract-content" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115480 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="extract-content" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115918 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="787d1be6-bb17-4368-a423-42aca50eb005" containerName="registry-server" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.115945 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b3f314-6188-4c12-b687-82f5aea9bfd6" containerName="registry-server" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.116796 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.119255 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.119876 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.119919 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pdgpv" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.120172 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.127235 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.174697 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.174766 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/e5a46475-fb7c-41ff-ba13-98139467fd86-kube-api-access-x8j4w\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.174793 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.174874 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.175197 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.175309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.175390 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.175479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.175572 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278007 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278079 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278116 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278161 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278222 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278253 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/e5a46475-fb7c-41ff-ba13-98139467fd86-kube-api-access-x8j4w\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.278429 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-config-data\") pod \"tempest-tests-tempest\" 
(UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.279637 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.280507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.281158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.281359 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.282220 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " 
pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.284933 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.286390 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.287979 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.300566 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/e5a46475-fb7c-41ff-ba13-98139467fd86-kube-api-access-x8j4w\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.317619 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " pod="openstack/tempest-tests-tempest" Feb 19 22:13:18 crc kubenswrapper[4886]: I0219 22:13:18.448636 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 22:13:19 crc kubenswrapper[4886]: I0219 22:13:19.027963 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 22:13:19 crc kubenswrapper[4886]: I0219 22:13:19.307942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a46475-fb7c-41ff-ba13-98139467fd86","Type":"ContainerStarted","Data":"7d0b7a040fd842496ce8718b67f114d63cb8ed82ba1704595d7a17a7109b56c2"} Feb 19 22:13:19 crc kubenswrapper[4886]: I0219 22:13:19.601799 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:13:19 crc kubenswrapper[4886]: E0219 22:13:19.602242 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:13:31 crc kubenswrapper[4886]: I0219 22:13:31.601892 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:13:31 crc kubenswrapper[4886]: E0219 22:13:31.602755 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:13:45 crc kubenswrapper[4886]: I0219 22:13:45.601917 4886 scope.go:117] "RemoveContainer" 
containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:13:45 crc kubenswrapper[4886]: E0219 22:13:45.602700 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:13:54 crc kubenswrapper[4886]: E0219 22:13:54.523436 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 19 22:13:54 crc kubenswrapper[4886]: E0219 22:13:54.527553 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPa
th:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8j4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(e5a46475-fb7c-41ff-ba13-98139467fd86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 22:13:54 crc kubenswrapper[4886]: E0219 22:13:54.528911 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="e5a46475-fb7c-41ff-ba13-98139467fd86" Feb 19 22:13:54 crc kubenswrapper[4886]: E0219 22:13:54.759645 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e5a46475-fb7c-41ff-ba13-98139467fd86" Feb 19 22:13:57 crc kubenswrapper[4886]: I0219 22:13:57.601914 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:13:57 crc kubenswrapper[4886]: E0219 22:13:57.602813 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:14:08 crc kubenswrapper[4886]: I0219 22:14:08.065230 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 22:14:08 crc kubenswrapper[4886]: I0219 22:14:08.609358 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 
22:14:08 crc kubenswrapper[4886]: E0219 22:14:08.610755 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:14:09 crc kubenswrapper[4886]: I0219 22:14:09.935928 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a46475-fb7c-41ff-ba13-98139467fd86","Type":"ContainerStarted","Data":"60cd21aa06a4a03dea01de52976a4511f4863d11c5c53499deedf31afa59cf3b"} Feb 19 22:14:09 crc kubenswrapper[4886]: I0219 22:14:09.965044 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.933806421 podStartE2EDuration="53.965023292s" podCreationTimestamp="2026-02-19 22:13:16 +0000 UTC" firstStartedPulling="2026-02-19 22:13:19.027776385 +0000 UTC m=+4429.655619435" lastFinishedPulling="2026-02-19 22:14:08.058993246 +0000 UTC m=+4478.686836306" observedRunningTime="2026-02-19 22:14:09.95887108 +0000 UTC m=+4480.586714140" watchObservedRunningTime="2026-02-19 22:14:09.965023292 +0000 UTC m=+4480.592866342" Feb 19 22:14:20 crc kubenswrapper[4886]: I0219 22:14:20.613078 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:14:20 crc kubenswrapper[4886]: E0219 22:14:20.629019 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:14:32 crc kubenswrapper[4886]: I0219 22:14:32.602155 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:14:32 crc kubenswrapper[4886]: E0219 22:14:32.603001 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:14:46 crc kubenswrapper[4886]: I0219 22:14:46.601425 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:14:46 crc kubenswrapper[4886]: E0219 22:14:46.603604 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.362128 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6"] Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.376904 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.382725 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.386745 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.448849 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8cf5b6-801b-42f8-814f-cac5311d2292-config-volume\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.448954 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8cf5b6-801b-42f8-814f-cac5311d2292-secret-volume\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.449149 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7cx\" (UniqueName: \"kubernetes.io/projected/da8cf5b6-801b-42f8-814f-cac5311d2292-kube-api-access-4t7cx\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.481408 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6"] Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.551674 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8cf5b6-801b-42f8-814f-cac5311d2292-config-volume\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.551728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8cf5b6-801b-42f8-814f-cac5311d2292-secret-volume\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.552029 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7cx\" (UniqueName: \"kubernetes.io/projected/da8cf5b6-801b-42f8-814f-cac5311d2292-kube-api-access-4t7cx\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.560394 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8cf5b6-801b-42f8-814f-cac5311d2292-config-volume\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.588799 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/da8cf5b6-801b-42f8-814f-cac5311d2292-secret-volume\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.598206 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7cx\" (UniqueName: \"kubernetes.io/projected/da8cf5b6-801b-42f8-814f-cac5311d2292-kube-api-access-4t7cx\") pod \"collect-profiles-29525655-b2lk6\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.612604 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:15:00 crc kubenswrapper[4886]: E0219 22:15:00.612858 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:15:00 crc kubenswrapper[4886]: I0219 22:15:00.732921 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:02 crc kubenswrapper[4886]: I0219 22:15:02.090723 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6"] Feb 19 22:15:02 crc kubenswrapper[4886]: I0219 22:15:02.517766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" event={"ID":"da8cf5b6-801b-42f8-814f-cac5311d2292","Type":"ContainerStarted","Data":"b563403fdf9becd9616b7e8d23ddf84d5d98fa9d9c45625709a0c4c4591aefb3"} Feb 19 22:15:02 crc kubenswrapper[4886]: I0219 22:15:02.518084 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" event={"ID":"da8cf5b6-801b-42f8-814f-cac5311d2292","Type":"ContainerStarted","Data":"4239ec69f774517fde27442a170232845a005146a0940c5442a0d2c0dc25199d"} Feb 19 22:15:02 crc kubenswrapper[4886]: I0219 22:15:02.544664 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" podStartSLOduration=2.54463076 podStartE2EDuration="2.54463076s" podCreationTimestamp="2026-02-19 22:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 22:15:02.533502874 +0000 UTC m=+4533.161345924" watchObservedRunningTime="2026-02-19 22:15:02.54463076 +0000 UTC m=+4533.172473810" Feb 19 22:15:03 crc kubenswrapper[4886]: I0219 22:15:03.534906 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" event={"ID":"da8cf5b6-801b-42f8-814f-cac5311d2292","Type":"ContainerDied","Data":"b563403fdf9becd9616b7e8d23ddf84d5d98fa9d9c45625709a0c4c4591aefb3"} Feb 19 22:15:03 crc kubenswrapper[4886]: I0219 22:15:03.534514 4886 
generic.go:334] "Generic (PLEG): container finished" podID="da8cf5b6-801b-42f8-814f-cac5311d2292" containerID="b563403fdf9becd9616b7e8d23ddf84d5d98fa9d9c45625709a0c4c4591aefb3" exitCode=0 Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.095706 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.185084 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t7cx\" (UniqueName: \"kubernetes.io/projected/da8cf5b6-801b-42f8-814f-cac5311d2292-kube-api-access-4t7cx\") pod \"da8cf5b6-801b-42f8-814f-cac5311d2292\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.185195 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8cf5b6-801b-42f8-814f-cac5311d2292-config-volume\") pod \"da8cf5b6-801b-42f8-814f-cac5311d2292\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.185392 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8cf5b6-801b-42f8-814f-cac5311d2292-secret-volume\") pod \"da8cf5b6-801b-42f8-814f-cac5311d2292\" (UID: \"da8cf5b6-801b-42f8-814f-cac5311d2292\") " Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.193737 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8cf5b6-801b-42f8-814f-cac5311d2292-config-volume" (OuterVolumeSpecName: "config-volume") pod "da8cf5b6-801b-42f8-814f-cac5311d2292" (UID: "da8cf5b6-801b-42f8-814f-cac5311d2292"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.201559 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8cf5b6-801b-42f8-814f-cac5311d2292-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da8cf5b6-801b-42f8-814f-cac5311d2292" (UID: "da8cf5b6-801b-42f8-814f-cac5311d2292"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.201644 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8cf5b6-801b-42f8-814f-cac5311d2292-kube-api-access-4t7cx" (OuterVolumeSpecName: "kube-api-access-4t7cx") pod "da8cf5b6-801b-42f8-814f-cac5311d2292" (UID: "da8cf5b6-801b-42f8-814f-cac5311d2292"). InnerVolumeSpecName "kube-api-access-4t7cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.288331 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8cf5b6-801b-42f8-814f-cac5311d2292-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.288370 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t7cx\" (UniqueName: \"kubernetes.io/projected/da8cf5b6-801b-42f8-814f-cac5311d2292-kube-api-access-4t7cx\") on node \"crc\" DevicePath \"\"" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.288384 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8cf5b6-801b-42f8-814f-cac5311d2292-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.573544 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" 
event={"ID":"da8cf5b6-801b-42f8-814f-cac5311d2292","Type":"ContainerDied","Data":"4239ec69f774517fde27442a170232845a005146a0940c5442a0d2c0dc25199d"} Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.573683 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525655-b2lk6" Feb 19 22:15:05 crc kubenswrapper[4886]: I0219 22:15:05.574733 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4239ec69f774517fde27442a170232845a005146a0940c5442a0d2c0dc25199d" Feb 19 22:15:06 crc kubenswrapper[4886]: I0219 22:15:06.194076 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l"] Feb 19 22:15:06 crc kubenswrapper[4886]: I0219 22:15:06.204173 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525610-r8f7l"] Feb 19 22:15:06 crc kubenswrapper[4886]: I0219 22:15:06.620347 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2484eb25-a176-45d0-aa84-91ea87297c90" path="/var/lib/kubelet/pods/2484eb25-a176-45d0-aa84-91ea87297c90/volumes" Feb 19 22:15:14 crc kubenswrapper[4886]: I0219 22:15:14.602487 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:15:14 crc kubenswrapper[4886]: E0219 22:15:14.603210 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:15:27 crc kubenswrapper[4886]: I0219 22:15:27.606909 4886 scope.go:117] "RemoveContainer" 
containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:15:27 crc kubenswrapper[4886]: E0219 22:15:27.608982 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:15:42 crc kubenswrapper[4886]: I0219 22:15:42.602632 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:15:42 crc kubenswrapper[4886]: E0219 22:15:42.605240 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6stm5_openshift-machine-config-operator(b096c32d-4192-4529-bc55-b05d09004007)\"" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" Feb 19 22:15:47 crc kubenswrapper[4886]: I0219 22:15:47.103173 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:53 crc kubenswrapper[4886]: I0219 22:15:53.607235 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:15:55 crc kubenswrapper[4886]: I0219 22:15:55.732665 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" 
event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"3f32c6f0dab5e6f6620d1e0dc38f6bc8583e537892ba22df70a1b0dc20ef3b5e"} Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 22:15:56.116179 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" podUID="e51c0ebd-f319-41ea-9f7e-17bca0f30b6c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 22:15:56.236145 4886 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b77f6dcd-4z22f container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 22:15:56.237993 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" podUID="9ae9d788-4b23-480d-be58-dedda686c24d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 22:15:56.261246 4886 scope.go:117] "RemoveContainer" containerID="9d4c590245dcb5d08c3997a6797ce4ef4dd9814d0f0f6320b7479d4be99c852f" Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 22:15:56.774229 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 
22:15:56.800362 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" podUID="997a5ddf-b07d-45c0-a843-a833e93596da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:56 crc kubenswrapper[4886]: I0219 22:15:56.800365 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:57 crc kubenswrapper[4886]: I0219 22:15:57.135663 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:57 crc kubenswrapper[4886]: I0219 22:15:57.136068 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:57 crc kubenswrapper[4886]: I0219 22:15:57.136557 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:57 crc kubenswrapper[4886]: I0219 22:15:57.136639 4886 prober.go:107] "Probe failed" 
probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" podUID="d9bf26b3-bce9-456e-a767-cd97c1160e4d" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:57 crc kubenswrapper[4886]: I0219 22:15:57.405073 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" probeResult="failure" output=< Feb 19 22:15:57 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:15:57 crc kubenswrapper[4886]: > Feb 19 22:15:57 crc kubenswrapper[4886]: I0219 22:15:57.414633 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" probeResult="failure" output=< Feb 19 22:15:57 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:15:57 crc kubenswrapper[4886]: > Feb 19 22:15:58 crc kubenswrapper[4886]: I0219 22:15:58.039155 4886 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-6zf9p container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:58 crc kubenswrapper[4886]: I0219 22:15:58.040151 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" podUID="bd3b67b3-9bd9-4b93-bc64-57be5e285a4f" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 
22:15:58 crc kubenswrapper[4886]: I0219 22:15:58.742509 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-kkxcp" podUID="6ae01887-56db-45d4-bf3e-0e66a8b3fed8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:58 crc kubenswrapper[4886]: I0219 22:15:58.742541 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-kkxcp" podUID="6ae01887-56db-45d4-bf3e-0e66a8b3fed8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.045647 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.046531 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.046672 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.046949 4886 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.140524 4886 patch_prober.go:28] interesting pod/metrics-server-799cc74bc-wv5f6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.140591 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" podUID="a55e7bbd-33a0-46c7-b08b-bf71421bd1bf" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.168946 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.169442 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.169183 4886 
patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": context deadline exceeded" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.169907 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": context deadline exceeded" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.552555 4886 patch_prober.go:28] interesting pod/monitoring-plugin-6b66dd58b-2rt7q container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.552631 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" podUID="773be00e-ec8f-4ad1-b356-5d80fda75835" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.867886 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.867930 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift 
namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.868282 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:15:59 crc kubenswrapper[4886]: I0219 22:15:59.868315 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.352454 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" podUID="dbea0765-f1be-4f22-a192-686a73112963" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.545491 4886 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-x82g8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc 
kubenswrapper[4886]: I0219 22:16:00.545586 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" podUID="d79537f2-b8d8-4f6f-8c38-65701d8c1c77" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.567949 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.567985 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.568039 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.568027 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.744217 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.744298 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.744217 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.744414 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.781419 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.781436 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.781473 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.781484 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.800018 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.800023 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 
22:16:00.843464 4886 patch_prober.go:28] interesting pod/thanos-querier-bc79bc97-87qbv container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:00 crc kubenswrapper[4886]: I0219 22:16:00.843527 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" podUID="e0c1d73e-0ad8-46cc-afbc-d19899896bdd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.038457 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.038524 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.039123 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.039152 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.039343 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.039362 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.039388 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.039392 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.044614 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.044637 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.044675 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.044683 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.088014 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc 
kubenswrapper[4886]: I0219 22:16:01.088072 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.489450 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" podUID="7474309f-a146-43e6-bd0d-03c678b50e92" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.489451 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" podUID="7474309f-a146-43e6-bd0d-03c678b50e92" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.792131 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.792433 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.877445 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook 
namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.877482 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.877523 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:01 crc kubenswrapper[4886]: I0219 22:16:01.877590 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:02 crc kubenswrapper[4886]: I0219 22:16:02.796735 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 22:16:05 crc kubenswrapper[4886]: I0219 22:16:05.003844 4886 
patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:05 crc kubenswrapper[4886]: I0219 22:16:05.003863 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:05 crc kubenswrapper[4886]: I0219 22:16:05.005684 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:05 crc kubenswrapper[4886]: I0219 22:16:05.005775 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:06 crc kubenswrapper[4886]: I0219 22:16:06.619493 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 19 22:16:06 crc kubenswrapper[4886]: I0219 22:16:06.619584 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:06 crc kubenswrapper[4886]: I0219 22:16:06.903961 4886 trace.go:236] Trace[689722868]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (19-Feb-2026 22:16:05.316) (total time: 1571ms): Feb 19 22:16:06 crc kubenswrapper[4886]: Trace[689722868]: [1.571442587s] [1.571442587s] END Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.034525 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:07 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:07 crc kubenswrapper[4886]: > Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.034532 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:07 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:07 crc kubenswrapper[4886]: > Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.100525 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:07 crc 
kubenswrapper[4886]: I0219 22:16:07.101254 4886 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.101341 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.103667 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-x88rq" Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.109653 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"635568fb4ab95e5da295d6830626e8657b24801b28b46144b2a90dbd81560f07"} pod="metallb-system/frr-k8s-x88rq" containerMessage="Container frr failed liveness probe, will be restarted" Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.111334 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" containerID="cri-o://635568fb4ab95e5da295d6830626e8657b24801b28b46144b2a90dbd81560f07" gracePeriod=2 Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.503052 4886 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.503134 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.866673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerDied","Data":"635568fb4ab95e5da295d6830626e8657b24801b28b46144b2a90dbd81560f07"} Feb 19 22:16:07 crc kubenswrapper[4886]: I0219 22:16:07.867630 4886 generic.go:334] "Generic (PLEG): container finished" podID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerID="635568fb4ab95e5da295d6830626e8657b24801b28b46144b2a90dbd81560f07" exitCode=143 Feb 19 22:16:08 crc kubenswrapper[4886]: I0219 22:16:08.905993 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:08 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:08 crc kubenswrapper[4886]: > Feb 19 22:16:08 crc kubenswrapper[4886]: I0219 22:16:08.906007 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-99mqn" podUID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:08 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:08 crc kubenswrapper[4886]: > Feb 19 22:16:08 crc kubenswrapper[4886]: I0219 22:16:08.907352 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-x88rq" event={"ID":"81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5","Type":"ContainerStarted","Data":"a99b3da372bec0577f18d615f36986d06d80353a6f8bc5aa150239377ddbce0d"} Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.093102 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:09 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:09 crc kubenswrapper[4886]: > Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.095174 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-99mqn" podUID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:09 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:09 crc kubenswrapper[4886]: > Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.099521 4886 patch_prober.go:28] interesting pod/metrics-server-799cc74bc-wv5f6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.099573 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" podUID="a55e7bbd-33a0-46c7-b08b-bf71421bd1bf" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.101200 4886 patch_prober.go:28] interesting pod/metrics-server-799cc74bc-wv5f6 
container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.101238 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" podUID="a55e7bbd-33a0-46c7-b08b-bf71421bd1bf" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.551907 4886 patch_prober.go:28] interesting pod/monitoring-plugin-6b66dd58b-2rt7q container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.551985 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" podUID="773be00e-ec8f-4ad1-b356-5d80fda75835" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.721022 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" podUID="ef72a766-0d85-430a-ab0d-f0eda86f582f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 
22:16:09.721093 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" podUID="ef72a766-0d85-430a-ab0d-f0eda86f582f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.868592 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.868612 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.868654 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:09 crc kubenswrapper[4886]: I0219 22:16:09.868742 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 
22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.089817 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-wnld7" podUID="989aa724-d476-4df1-9849-22c3acf90103" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:10 crc kubenswrapper[4886]: timeout: health rpc did not complete within 1s Feb 19 22:16:10 crc kubenswrapper[4886]: > Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.094666 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-wnld7" podUID="989aa724-d476-4df1-9849-22c3acf90103" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:10 crc kubenswrapper[4886]: timeout: health rpc did not complete within 1s Feb 19 22:16:10 crc kubenswrapper[4886]: > Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.668463 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.669127 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.668447 4886 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-x82g8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.669483 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" podUID="d79537f2-b8d8-4f6f-8c38-65701d8c1c77" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.668545 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.669565 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.744036 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.744135 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.744229 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.744317 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.783498 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.783925 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.783610 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.784179 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.812955 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:10 crc kubenswrapper[4886]: I0219 22:16:10.812988 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035515 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035607 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035515 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035666 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035538 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035702 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035567 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.035728 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.044327 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.044358 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.044405 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.044433 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 
19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.062120 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-x88rq" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.067204 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.067295 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.067607 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.067681 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.815095 4886 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.815102 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.877075 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.877147 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.877225 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:11 crc kubenswrapper[4886]: I0219 22:16:11.877242 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:12 crc kubenswrapper[4886]: I0219 22:16:12.105136 4886 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:12 crc kubenswrapper[4886]: I0219 22:16:12.805356 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="a30f0477-38c1-4a41-a633-81628dbab75a" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 22:16:12 crc kubenswrapper[4886]: I0219 22:16:12.805398 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="a30f0477-38c1-4a41-a633-81628dbab75a" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 22:16:12 crc kubenswrapper[4886]: I0219 22:16:12.805398 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.006708 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5c7789bd79-6tsmf" podUID="3f77b85f-2936-4a1e-80b3-610a06f7dbe3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.017755 4886 patch_prober.go:28] interesting 
pod/logging-loki-distributor-5d5548c9f5-rrspr container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.017827 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr" podUID="7ace2275-5b80-431f-8fda-ca350848bc07" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.135982 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="5fd4db44-9fcc-4954-9896-7f47be765647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.136044 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5fd4db44-9fcc-4954-9896-7f47be765647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.209898 4886 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-n662q container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:13 crc 
kubenswrapper[4886]: I0219 22:16:13.209965 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" podUID="cc980d53-db1b-43e3-9922-ea78f89031d2" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.275622 4886 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-bx48p container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.275701 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" podUID="83c174ac-6edf-4973-b8d0-dc71b548f1c9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.816523 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" podUID="d5e2840a-8a17-4ddd-92e5-d033222d3dee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.816530 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" podUID="d5e2840a-8a17-4ddd-92e5-d033222d3dee" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.981579 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" podUID="6b93dd73-4b64-418b-aa60-511213b8f1fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.981704 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" podUID="06cb83ff-29f5-438f-87b0-32bb5899552d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.982066 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" podUID="06cb83ff-29f5-438f-87b0-32bb5899552d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:13 crc kubenswrapper[4886]: I0219 22:16:13.982293 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" podUID="6b93dd73-4b64-418b-aa60-511213b8f1fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.018139 4886 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-rrspr container/loki-distributor 
namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.50:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.018302 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr" podUID="7ace2275-5b80-431f-8fda-ca350848bc07" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.064637 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" podUID="b6a90270-aa6d-4792-96bc-333bff7f15df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.064637 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" podUID="b6a90270-aa6d-4792-96bc-333bff7f15df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.064878 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.064965 4886 patch_prober.go:28] interesting 
pod/logging-loki-gateway-54d798b65b-ncwth container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.064970 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.065059 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.176566 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" podUID="3d51e481-ad0d-4d45-b0ee-7ce02b1c428d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.191973 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.192053 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.192569 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" podUID="3d51e481-ad0d-4d45-b0ee-7ce02b1c428d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.192633 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" podUID="14b2ecba-fa5a-41f5-90d5-5085e30e277e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.209139 4886 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-n662q container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.209204 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" podUID="cc980d53-db1b-43e3-9922-ea78f89031d2" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.232334 4886 
patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.232409 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="b703c587-ef88-4e74-a6a5-c71a11625f76" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.274897 4886 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-bx48p container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.274981 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" podUID="83c174ac-6edf-4973-b8d0-dc71b548f1c9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.275431 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.275461 4886 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.275536 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.275601 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.275657 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" podUID="14b2ecba-fa5a-41f5-90d5-5085e30e277e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.358596 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" podUID="5c4a962c-02bb-48ff-9444-db393b42a9b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc 
kubenswrapper[4886]: I0219 22:16:14.358589 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" podUID="56992c82-2769-4a27-ac41-864dda46aa88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.437674 4886 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.437959 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="6ff2edc4-4f98-4d66-84f7-24a345741eec" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.441429 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" podUID="56992c82-2769-4a27-ac41-864dda46aa88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.441457 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" podUID="fe935d54-8e74-4df9-a450-19df5d20b568" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 
22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.441456 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" podUID="5c4a962c-02bb-48ff-9444-db393b42a9b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.524435 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" podUID="fe935d54-8e74-4df9-a450-19df5d20b568" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.524536 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" podUID="6554dbf1-0f46-434a-902d-9aa5bbd055d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.610162 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" podUID="17769417-6658-4daf-8268-e92194198b5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.610398 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7c2qx" podUID="6554dbf1-0f46-434a-902d-9aa5bbd055d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.734775 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" podUID="17769417-6658-4daf-8268-e92194198b5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.734833 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" podUID="16e0b754-3cd3-433d-80c6-11363689e9c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.734757 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" podUID="67db5487-865b-4ce2-8ade-a87f6909b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.734915 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" podUID="5537cddb-9e8f-4097-9228-e741c3145b56" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.817606 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" podUID="7d34c65a-18e8-4709-856b-232ceae77630" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.900446 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podUID="3a08ee98-4149-4379-bb3d-e05dd76f5c8d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.900446 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" podUID="67db5487-865b-4ce2-8ade-a87f6909b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.900622 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" podUID="7d34c65a-18e8-4709-856b-232ceae77630" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.900749 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podUID="3a08ee98-4149-4379-bb3d-e05dd76f5c8d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.920245 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager 
namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.920308 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.920331 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.920391 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.993870 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8081/live\": EOF" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.993948 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/live\": EOF" Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.993881 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8081/live\": EOF" start-of-body= Feb 19 22:16:14 crc kubenswrapper[4886]: I0219 22:16:14.994107 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/live\": EOF" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.046573 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.046648 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.168522 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.168847 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.192014 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.192079 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.231912 4886 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.232217 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="b703c587-ef88-4e74-a6a5-c71a11625f76" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.437629 4886 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.437707 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="6ff2edc4-4f98-4d66-84f7-24a345741eec" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906369 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nnq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906431 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" podUID="4f34ced6-828e-4337-8aad-b2ce35c35793" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906500 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nnq6 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 
22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906513 4886 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-pnltp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906538 4886 patch_prober.go:28] interesting pod/console-6dd97696d9-fc9t4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906571 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906587 4886 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-pnltp container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906538 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" podUID="4f34ced6-828e-4337-8aad-b2ce35c35793" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906615 4886 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.906595 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6dd97696d9-fc9t4" podUID="19543fcd-426f-4e08-91d1-02e568aa31d8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.947619 4886 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-hn4jw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.20:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:15 crc kubenswrapper[4886]: I0219 22:16:15.947703 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podUID="80fe938a-32f9-4742-ab1d-d1fafa082776" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.20:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.110478 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" podUID="e51c0ebd-f319-41ea-9f7e-17bca0f30b6c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.110661 4886 prober.go:107] 
"Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" podUID="f21bc18b-845c-491a-8d27-4cdb035e26bc" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.276607 4886 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b77f6dcd-4z22f container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.276675 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" podUID="9ae9d788-4b23-480d-be58-dedda686c24d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.449476 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-gc6d9" podUID="7474309f-a146-43e6-bd0d-03c678b50e92" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.645536 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.645779 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-manager-78fbc88654-kmltj" podUID="997a5ddf-b07d-45c0-a843-a833e93596da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.645996 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.649126 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.649219 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.657642 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"3ed96a5bb1f771dee8ebad466fb2a164686f5dad38b1b022d610711705cc91f0"} pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" containerMessage="Container webhook-server failed liveness probe, will be restarted" Feb 19 22:16:16 crc kubenswrapper[4886]: I0219 22:16:16.658622 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" containerID="cri-o://3ed96a5bb1f771dee8ebad466fb2a164686f5dad38b1b022d610711705cc91f0" gracePeriod=2 Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.181566 4886 
prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.181607 4886 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.181963 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.265473 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.265496 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" podUID="d9bf26b3-bce9-456e-a767-cd97c1160e4d" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.265541 4886 
prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5n8d9" podUID="d9bf26b3-bce9-456e-a767-cd97c1160e4d" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.265496 4886 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.502884 4886 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.502952 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.691510 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" podUID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: 
I0219 22:16:17.803033 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="a30f0477-38c1-4a41-a633-81628dbab75a" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.803033 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="a30f0477-38c1-4a41-a633-81628dbab75a" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.859522 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-p4hpl" podUID="224de83c-e009-463c-8d59-f2bfa7cd41a5" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.859591 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-p4hpl" podUID="224de83c-e009-463c-8d59-f2bfa7cd41a5" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:17 crc kubenswrapper[4886]: I0219 22:16:17.890023 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qjcds" podUID="f21bc18b-845c-491a-8d27-4cdb035e26bc" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.039670 4886 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-6zf9p container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.039758 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6zf9p" podUID="bd3b67b3-9bd9-4b93-bc64-57be5e285a4f" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.135148 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5fd4db44-9fcc-4954-9896-7f47be765647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.135306 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="5fd4db44-9fcc-4954-9896-7f47be765647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.353844 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="a36103a2-c43e-47d3-ace0-5a41849c2a86" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.6:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.742503 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-kkxcp" podUID="6ae01887-56db-45d4-bf3e-0e66a8b3fed8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.742498 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-kkxcp" podUID="6ae01887-56db-45d4-bf3e-0e66a8b3fed8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:18 crc kubenswrapper[4886]: I0219 22:16:18.801683 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.045966 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.046336 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.046039 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.046386 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.103951 4886 patch_prober.go:28] interesting pod/metrics-server-799cc74bc-wv5f6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.104020 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" podUID="a55e7bbd-33a0-46c7-b08b-bf71421bd1bf" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.104078 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.108235 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"fc38f9b1ac58fcaea104c158b4f031beac2385497d750e96b62357f29fd370f0"} pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" containerMessage="Container metrics-server failed liveness probe, will be restarted" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.114729 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" podUID="a55e7bbd-33a0-46c7-b08b-bf71421bd1bf" containerName="metrics-server" 
containerID="cri-o://fc38f9b1ac58fcaea104c158b4f031beac2385497d750e96b62357f29fd370f0" gracePeriod=170 Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.168229 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.168277 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.168315 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.168333 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.551850 4886 patch_prober.go:28] interesting pod/monitoring-plugin-6b66dd58b-2rt7q container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.551919 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" podUID="773be00e-ec8f-4ad1-b356-5d80fda75835" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.552015 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.675480 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6d62b" podUID="ef72a766-0d85-430a-ab0d-f0eda86f582f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.797471 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.797621 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fvkbm" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.799395 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.799451 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-fvkbm" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.799880 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5"} pod="openstack-operators/openstack-operator-index-fvkbm" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.799931 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" containerID="cri-o://59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" gracePeriod=30 Feb 19 22:16:19 crc kubenswrapper[4886]: E0219 22:16:19.808388 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 22:16:19 crc kubenswrapper[4886]: E0219 22:16:19.810531 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 22:16:19 crc kubenswrapper[4886]: E0219 22:16:19.812037 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 
22:16:19 crc kubenswrapper[4886]: E0219 22:16:19.812101 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.868214 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.868282 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.868324 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.868921 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.868950 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.869010 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 22:16:19 crc kubenswrapper[4886]: I0219 22:16:19.869774 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"90893d6f3ab89232e436d6eb9a88fa498ca207e72b0039fa3f7c39d14f34d981"} pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.215750 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" event={"ID":"5fa1c852-1a91-4a17-9edf-42db1180c6a9","Type":"ContainerDied","Data":"3ed96a5bb1f771dee8ebad466fb2a164686f5dad38b1b022d610711705cc91f0"} Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.219179 4886 generic.go:334] "Generic (PLEG): container finished" podID="5fa1c852-1a91-4a17-9edf-42db1180c6a9" containerID="3ed96a5bb1f771dee8ebad466fb2a164686f5dad38b1b022d610711705cc91f0" exitCode=137 Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.302473 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.302755 4886 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.350583 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" podUID="dbea0765-f1be-4f22-a192-686a73112963" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.545446 4886 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-x82g8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.545493 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" podUID="d79537f2-b8d8-4f6f-8c38-65701d8c1c77" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.545540 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.546869 4886 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"d628486841c930d70ac0ea808568e989d3ebe7dc1deeb8b3432ac9c1ff883006"} pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.546921 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" podUID="d79537f2-b8d8-4f6f-8c38-65701d8c1c77" containerName="authentication-operator" containerID="cri-o://d628486841c930d70ac0ea808568e989d3ebe7dc1deeb8b3432ac9c1ff883006" gracePeriod=30 Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.552445 4886 patch_prober.go:28] interesting pod/monitoring-plugin-6b66dd58b-2rt7q container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.553295 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" podUID="773be00e-ec8f-4ad1-b356-5d80fda75835" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.568647 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 
22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.568709 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.568818 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.568876 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.568950 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.569471 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.569857 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"58c368dab609f56b60b359eef9a7977b93dc89c3f28ec1c50df2606fe949ea1d"} pod="openshift-console-operator/console-operator-58897d9998-77m65" containerMessage="Container 
console-operator failed liveness probe, will be restarted" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.569894 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" containerID="cri-o://58c368dab609f56b60b359eef9a7977b93dc89c3f28ec1c50df2606fe949ea1d" gracePeriod=30 Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.678473 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-6lvw2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.678531 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.678620 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-6lvw2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.678685 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6lvw2" podUID="b77d3d20-a193-4bf3-a448-e48059491a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.686177 4886 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.686300 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.786443 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.786711 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.786835 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.786442 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.787023 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.786482 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.787118 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.787147 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.786501 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.787176 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.787184 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.787213 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.796508 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"0a10435e9325dfbab5af57e0a2eef876b5876904c17b9d7fb25deec0364e6084"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.796604 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" containerID="cri-o://0a10435e9325dfbab5af57e0a2eef876b5876904c17b9d7fb25deec0364e6084" gracePeriod=30 Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.796647 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" 
containerStatusID={"Type":"cri-o","ID":"9916cade98533177c7c201fd250ae0e30e134818d9c29b9ecef93a8d3bd60ff5"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.796707 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" containerID="cri-o://9916cade98533177c7c201fd250ae0e30e134818d9c29b9ecef93a8d3bd60ff5" gracePeriod=30 Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.801063 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.801143 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.802693 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.803400 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.803703 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 
22:16:20.843829 4886 patch_prober.go:28] interesting pod/thanos-querier-bc79bc97-87qbv container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:20 crc kubenswrapper[4886]: I0219 22:16:20.843901 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-bc79bc97-87qbv" podUID="e0c1d73e-0ad8-46cc-afbc-d19899896bdd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034574 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034659 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034736 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034814 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034842 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034870 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.034928 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.035096 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.035129 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.035189 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: 
Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.035238 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.035312 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.036452 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"22154de7fce099ae817eab468db21013de0cc98e06ec40d2357e1c7ccad7f6c5"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" containerMessage="Container olm-operator failed liveness probe, will be restarted" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.036492 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" containerID="cri-o://22154de7fce099ae817eab468db21013de0cc98e06ec40d2357e1c7ccad7f6c5" gracePeriod=30 Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.037187 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"77d5bb2375829a1ba73d0ec9c623fa93c14ac465ea228b73ecc32cedd015fe81"} pod="openshift-ingress/router-default-5444994796-2wz7p" containerMessage="Container router failed liveness probe, will be restarted" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 
22:16:21.037224 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" containerID="cri-o://77d5bb2375829a1ba73d0ec9c623fa93c14ac465ea228b73ecc32cedd015fe81" gracePeriod=10 Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.045883 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.045928 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.045966 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.047201 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"d371f61a178e26c17bce16ae393bd4c85644e36e71fa8993630153e17f52375d"} pod="openshift-controller-manager/controller-manager-677cd87946-f626n" containerMessage="Container controller-manager failed liveness probe, will be restarted" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.047233 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" containerID="cri-o://d371f61a178e26c17bce16ae393bd4c85644e36e71fa8993630153e17f52375d" gracePeriod=30 Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.047359 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.047375 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.067929 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.067956 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 
22:16:21.067986 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.068022 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.068006 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.069427 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"c9e4b54c2f873bb79b1ae8bd3024248610ab45c427de5e01d40e1750f68f458b"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.069468 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" containerID="cri-o://c9e4b54c2f873bb79b1ae8bd3024248610ab45c427de5e01d40e1750f68f458b" gracePeriod=30 Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.544975 4886 patch_prober.go:28] 
interesting pod/image-registry-66df7c8f76-hvf8k container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.545423 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" podUID="d6e47170-8742-401a-86bd-967c3fc623be" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.544978 4886 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-hvf8k container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.545622 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-hvf8k" podUID="d6e47170-8742-401a-86bd-967c3fc623be" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.569466 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.569564 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.785289 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Feb 19 22:16:21 crc kubenswrapper[4886]: [+]has-synced ok Feb 19 22:16:21 crc kubenswrapper[4886]: [-]process-running failed: reason withheld Feb 19 22:16:21 crc kubenswrapper[4886]: healthz check failed Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.785374 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.796979 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.797198 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-99mqn" podUID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.797004 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" 
probeResult="failure" output="command timed out" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.797314 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-99mqn" podUID="eebd68ba-8223-48ca-ad2a-fdc786eedad2" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.798007 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.798016 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.798048 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.798107 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.801108 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.829462 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.829523 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.829505 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.829671 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.877409 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.877460 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get 
\"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.877487 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.877515 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.877572 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.877597 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.884159 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"00e9543f388337de040f8037d1d90e02b98d02a17892a15941af030c83dea11f"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" containerMessage="Container prometheus-operator-admission-webhook failed 
liveness probe, will be restarted" Feb 19 22:16:21 crc kubenswrapper[4886]: I0219 22:16:21.884236 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" containerID="cri-o://00e9543f388337de040f8037d1d90e02b98d02a17892a15941af030c83dea11f" gracePeriod=30 Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.036592 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.037324 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.099505 4886 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.707987 4886 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-kg2xv container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:22 crc 
kubenswrapper[4886]: I0219 22:16:22.708059 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg2xv" podUID="a66fafd0-fa91-4368-8262-88fc7ef86dfa" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.793367 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.878535 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:22 crc kubenswrapper[4886]: I0219 22:16:22.878588 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.017814 4886 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-rrspr container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:23 
crc kubenswrapper[4886]: I0219 22:16:23.017874 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-rrspr" podUID="7ace2275-5b80-431f-8fda-ca350848bc07" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.135526 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="5fd4db44-9fcc-4954-9896-7f47be765647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/healthy\": context deadline exceeded" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.135689 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5fd4db44-9fcc-4954-9896-7f47be765647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.135874 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.208739 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-xxqfx" podUID="c92403c4-f071-44a9-a322-4e849ae93c8c" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:23 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:23 crc kubenswrapper[4886]: > Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.208762 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-wnld7" podUID="989aa724-d476-4df1-9849-22c3acf90103" containerName="registry-server" 
probeResult="failure" output=< Feb 19 22:16:23 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:23 crc kubenswrapper[4886]: > Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.208861 4886 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-n662q container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.208919 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-n662q" podUID="cc980d53-db1b-43e3-9922-ea78f89031d2" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.214162 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-wnld7" podUID="989aa724-d476-4df1-9849-22c3acf90103" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:23 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:23 crc kubenswrapper[4886]: > Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.260094 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" event={"ID":"5fa1c852-1a91-4a17-9edf-42db1180c6a9","Type":"ContainerStarted","Data":"889822d96a206adf114c92d755a9f588cc2b0ee9bec5fa0b8e6450333a58ffa1"} Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.260395 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 22:16:23 crc kubenswrapper[4886]: 
I0219 22:16:23.270561 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-77m65_5d6ed5aa-d2e5-4622-8506-4ef0502af8c2/console-operator/0.log" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.270630 4886 generic.go:334] "Generic (PLEG): container finished" podID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerID="58c368dab609f56b60b359eef9a7977b93dc89c3f28ec1c50df2606fe949ea1d" exitCode=1 Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.270664 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-77m65" event={"ID":"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2","Type":"ContainerDied","Data":"58c368dab609f56b60b359eef9a7977b93dc89c3f28ec1c50df2606fe949ea1d"} Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.274899 4886 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-bx48p container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.274966 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-bx48p" podUID="83c174ac-6edf-4973-b8d0-dc71b548f1c9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.673854 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" podUID="7d34c65a-18e8-4709-856b-232ceae77630" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.760585 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-r2k2r" podUID="a87f6938-30e5-4481-ba31-246084feaa8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.760638 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ggvsr" podUID="d5e2840a-8a17-4ddd-92e5-d033222d3dee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.794657 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="a30f0477-38c1-4a41-a633-81628dbab75a" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.794781 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.796242 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.796369 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.797026 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="a30f0477-38c1-4a41-a633-81628dbab75a" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.801005 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-xxqfx" podUID="c92403c4-f071-44a9-a322-4e849ae93c8c" containerName="registry-server" probeResult="failure" output=<
Feb 19 22:16:23 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s
Feb 19 22:16:23 crc kubenswrapper[4886]: >
Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.801480 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"bd67cce4e6dbfdc1339dd425114735eac488c1569337e3c9697ec44a066cd4ee"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.801632 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" containerName="ceilometer-central-agent" containerID="cri-o://bd67cce4e6dbfdc1339dd425114735eac488c1569337e3c9697ec44a066cd4ee" gracePeriod=30
Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.882480 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dcg9l" podUID="06cb83ff-29f5-438f-87b0-32bb5899552d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.882517 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dm5th" podUID="6b93dd73-4b64-418b-aa60-511213b8f1fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:23 crc kubenswrapper[4886]: I0219 22:16:23.894887 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.002350 4886 trace.go:236] Trace[1748077755]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (19-Feb-2026 22:16:22.248) (total time: 1751ms):
Feb 19 22:16:24 crc kubenswrapper[4886]: Trace[1748077755]: [1.751320699s] [1.751320699s] END
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.002348 4886 trace.go:236] Trace[234905670]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (19-Feb-2026 22:16:19.605) (total time: 4394ms):
Feb 19 22:16:24 crc kubenswrapper[4886]: Trace[234905670]: [4.394524335s] [4.394524335s] END
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.002364 4886 trace.go:236] Trace[1950333980]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (19-Feb-2026 22:16:15.735) (total time: 8264ms):
Feb 19 22:16:24 crc kubenswrapper[4886]: Trace[1950333980]: [8.264305767s] [8.264305767s] END
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.016462 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtwh6" podUID="4906db46-96ab-4aac-8aac-ba1532087aa2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.045577 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.045642 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.057512 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-mvk6x" podUID="b6a90270-aa6d-4792-96bc-333bff7f15df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.057585 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-ncwth container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.057664 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-ncwth" podUID="0749f3c7-3653-4491-a8b4-3327797bb266" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.094610 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" podUID="3d51e481-ad0d-4d45-b0ee-7ce02b1c428d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.169523 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.169597 4886 patch_prober.go:28] interesting pod/logging-loki-gateway-54d798b65b-r4kj2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.169613 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.169662 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54d798b65b-r4kj2" podUID="ae5b30e8-ead2-44ca-bfdd-8e28b23ef040" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.287236 4886 generic.go:334] "Generic (PLEG): container finished" podID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerID="22154de7fce099ae817eab468db21013de0cc98e06ec40d2357e1c7ccad7f6c5" exitCode=0
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.287317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" event={"ID":"0fd9b417-e431-4afc-b6a7-5c269fa04171","Type":"ContainerDied","Data":"22154de7fce099ae817eab468db21013de0cc98e06ec40d2357e1c7ccad7f6c5"}
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.289783 4886 generic.go:334] "Generic (PLEG): container finished" podID="7d34c65a-18e8-4709-856b-232ceae77630" containerID="5cac61a4eadecbb94d8af04f897c727b83b2b7fa7c78f626be3c1a58546672ed" exitCode=1
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.289869 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" event={"ID":"7d34c65a-18e8-4709-856b-232ceae77630","Type":"ContainerDied","Data":"5cac61a4eadecbb94d8af04f897c727b83b2b7fa7c78f626be3c1a58546672ed"}
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.292817 4886 generic.go:334] "Generic (PLEG): container finished" podID="22d58380-a045-498b-aa0e-d07a603210ff" containerID="c9e4b54c2f873bb79b1ae8bd3024248610ab45c427de5e01d40e1750f68f458b" exitCode=0
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.292916 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" event={"ID":"22d58380-a045-498b-aa0e-d07a603210ff","Type":"ContainerDied","Data":"c9e4b54c2f873bb79b1ae8bd3024248610ab45c427de5e01d40e1750f68f458b"}
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.299439 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6kxv6" podUID="14b2ecba-fa5a-41f5-90d5-5085e30e277e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.299498 4886 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.299541 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.299562 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d1d2e425-8e15-48fc-a486-535954b89459" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.299556 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="b703c587-ef88-4e74-a6a5-c71a11625f76" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.299611 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2fmz6" podUID="96ff2cdf-fb2a-4544-a042-f17dcfc808c2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.301835 4886 scope.go:117] "RemoveContainer" containerID="5cac61a4eadecbb94d8af04f897c727b83b2b7fa7c78f626be3c1a58546672ed"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.340455 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nj4ct" podUID="5c4a962c-02bb-48ff-9444-db393b42a9b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.340459 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qd57q" podUID="78de7c34-f842-4938-8bbb-fef238359913" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.340495 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-s5rp2" podUID="56992c82-2769-4a27-ac41-864dda46aa88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.438175 4886 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.438249 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="6ff2edc4-4f98-4d66-84f7-24a345741eec" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.519448 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-l2kk9" podUID="17769417-6658-4daf-8268-e92194198b5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.519536 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wxjwn" podUID="16e0b754-3cd3-433d-80c6-11363689e9c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.560524 4886 patch_prober.go:28] interesting pod/apiserver-76f77b778f-sk46q container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.560594 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-sk46q" podUID="8fc9ae95-b72d-42a3-943d-30c652843b61" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.643466 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" podUID="5537cddb-9e8f-4097-9228-e741c3145b56" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.643501 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5484b6858b-lrr24" podUID="67db5487-865b-4ce2-8ade-a87f6909b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.643701 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5576fd9fcc-hfcqz" podUID="5537cddb-9e8f-4097-9228-e741c3145b56" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.726451 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-znrbn" podUID="3a08ee98-4149-4379-bb3d-e05dd76f5c8d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: E0219 22:16:24.839061 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 22:16:24 crc kubenswrapper[4886]: E0219 22:16:24.840739 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 22:16:24 crc kubenswrapper[4886]: E0219 22:16:24.842421 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 22:16:24 crc kubenswrapper[4886]: E0219 22:16:24.842452 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fvkbm" podUID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerName="registry-server"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.921376 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.921452 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.921521 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.921537 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.921568 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.922625 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"dd2646a7450d3125e062b69ad24dace478d7444cb86f6346d3180a1f34417c62"} pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Feb 19 22:16:24 crc kubenswrapper[4886]: I0219 22:16:24.922678 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" containerID="cri-o://dd2646a7450d3125e062b69ad24dace478d7444cb86f6346d3180a1f34417c62" gracePeriod=30
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.235035 4886 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b77f6dcd-4z22f container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.235111 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" podUID="9ae9d788-4b23-480d-be58-dedda686c24d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": dial tcp 10.217.0.48:8081: connect: connection refused"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.235057 4886 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b77f6dcd-4z22f container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.235440 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" podUID="9ae9d788-4b23-480d-be58-dedda686c24d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.317815 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-77m65_5d6ed5aa-d2e5-4622-8506-4ef0502af8c2/console-operator/0.log"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.318030 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-77m65" event={"ID":"5d6ed5aa-d2e5-4622-8506-4ef0502af8c2","Type":"ContainerStarted","Data":"7438f4e1ecfbd9bc59a3801e96771606916c5d83f795dd731b571e908966f804"}
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.318415 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-77m65"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.318745 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.318800 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.321528 4886 generic.go:334] "Generic (PLEG): container finished" podID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerID="0a10435e9325dfbab5af57e0a2eef876b5876904c17b9d7fb25deec0364e6084" exitCode=0
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.321589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" event={"ID":"dbb72093-2c40-4a92-b1bf-18d8175fb1c8","Type":"ContainerDied","Data":"0a10435e9325dfbab5af57e0a2eef876b5876904c17b9d7fb25deec0364e6084"}
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.325357 4886 generic.go:334] "Generic (PLEG): container finished" podID="9ae9d788-4b23-480d-be58-dedda686c24d" containerID="d2ede7a0f5aa4b85a5ecc5fe01052d03b307b76b85ebc0b50d8bb0736d1ed81f" exitCode=1
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.325422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" event={"ID":"9ae9d788-4b23-480d-be58-dedda686c24d","Type":"ContainerDied","Data":"d2ede7a0f5aa4b85a5ecc5fe01052d03b307b76b85ebc0b50d8bb0736d1ed81f"}
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.326682 4886 scope.go:117] "RemoveContainer" containerID="d2ede7a0f5aa4b85a5ecc5fe01052d03b307b76b85ebc0b50d8bb0736d1ed81f"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.328186 4886 generic.go:334] "Generic (PLEG): container finished" podID="fe935d54-8e74-4df9-a450-19df5d20b568" containerID="a03290c2214ba04329b6a7c15a737cef23de21827fa862693a50dd117be5db18" exitCode=1
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.328298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" event={"ID":"fe935d54-8e74-4df9-a450-19df5d20b568","Type":"ContainerDied","Data":"a03290c2214ba04329b6a7c15a737cef23de21827fa862693a50dd117be5db18"}
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.329740 4886 scope.go:117] "RemoveContainer" containerID="a03290c2214ba04329b6a7c15a737cef23de21827fa862693a50dd117be5db18"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.331959 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" event={"ID":"22d58380-a045-498b-aa0e-d07a603210ff","Type":"ContainerStarted","Data":"bd02d1161ff59851d0933eb35ffbe0c1d72bfa4d79386bc56490895f2a20a520"}
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.332726 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.332777 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.333707 4886 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" containerID="cri-o://c9e4b54c2f873bb79b1ae8bd3024248610ab45c427de5e01d40e1750f68f458b"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.333727 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.714243 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.714566 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898489 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nnq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898546 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" podUID="4f34ced6-828e-4337-8aad-b2ce35c35793" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898489 4886 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-pnltp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898608 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898677 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nnq6 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898723 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-8nnq6" podUID="4f34ced6-828e-4337-8aad-b2ce35c35793" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898780 4886 patch_prober.go:28] interesting pod/console-6dd97696d9-fc9t4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898815 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6dd97696d9-fc9t4" podUID="19543fcd-426f-4e08-91d1-02e568aa31d8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898825 4886 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-pnltp container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.898851 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-pnltp" podUID="6d5c4d2c-ca63-4cba-9ea5-fba7281716b4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.26:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.980640 4886 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-hn4jw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.20:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.980776 4886 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-hn4jw container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.20:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.980835 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podUID="80fe938a-32f9-4742-ab1d-d1fafa082776" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.20:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:25 crc kubenswrapper[4886]: I0219 22:16:25.980914 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-hn4jw" podUID="80fe938a-32f9-4742-ab1d-d1fafa082776" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.20:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.306044 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.306369 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.353457 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" event={"ID":"9ae9d788-4b23-480d-be58-dedda686c24d","Type":"ContainerStarted","Data":"1b304ba7599ca8a617affe391d39a5df455cd82369ae5f6bfcebb45038ecad67"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.354388 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.360730 4886 generic.go:334] "Generic (PLEG): container finished" podID="d79537f2-b8d8-4f6f-8c38-65701d8c1c77" containerID="d628486841c930d70ac0ea808568e989d3ebe7dc1deeb8b3432ac9c1ff883006" exitCode=0
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.360896 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" event={"ID":"d79537f2-b8d8-4f6f-8c38-65701d8c1c77","Type":"ContainerDied","Data":"d628486841c930d70ac0ea808568e989d3ebe7dc1deeb8b3432ac9c1ff883006"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.360974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x82g8" event={"ID":"d79537f2-b8d8-4f6f-8c38-65701d8c1c77","Type":"ContainerStarted","Data":"0d40bdfc2afd29e543671e60d3476eaafe4639e630a0fb84c0a9f3c90823c9d0"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.363944 4886 generic.go:334] "Generic (PLEG): container finished" podID="3c83b560-7a5e-4c22-82dd-63e1a422e0d6" containerID="59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5" exitCode=0
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.364248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fvkbm" event={"ID":"3c83b560-7a5e-4c22-82dd-63e1a422e0d6","Type":"ContainerDied","Data":"59e81e7ab1a953cd66e6d2862dd31be6b0f8e33f3325a79563904857dafd06e5"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.367041 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" event={"ID":"fe935d54-8e74-4df9-a450-19df5d20b568","Type":"ContainerStarted","Data":"2b9634272ad226d6b6c3a9fe47bd221cea4f5258c32a0bb891c778dbb3da80e7"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.368247 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.372599 4886 generic.go:334] "Generic (PLEG): container finished" podID="3d51e481-ad0d-4d45-b0ee-7ce02b1c428d" containerID="de977314e77a83f55494bedf60ffdc05cfd43102e27304c1afd7089235c04107" exitCode=1
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.372688 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" event={"ID":"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d","Type":"ContainerDied","Data":"de977314e77a83f55494bedf60ffdc05cfd43102e27304c1afd7089235c04107"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.373612 4886 scope.go:117] "RemoveContainer" containerID="de977314e77a83f55494bedf60ffdc05cfd43102e27304c1afd7089235c04107"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.376157 4886 generic.go:334] "Generic (PLEG): container finished" podID="dbea0765-f1be-4f22-a192-686a73112963" containerID="fe5ddc33b66e4cf895c0165b1f918749cf05ba1a2818bcf6fe0d03d3e9d6e046" exitCode=1
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.376221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" event={"ID":"dbea0765-f1be-4f22-a192-686a73112963","Type":"ContainerDied","Data":"fe5ddc33b66e4cf895c0165b1f918749cf05ba1a2818bcf6fe0d03d3e9d6e046"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.376712 4886 scope.go:117] "RemoveContainer" containerID="fe5ddc33b66e4cf895c0165b1f918749cf05ba1a2818bcf6fe0d03d3e9d6e046"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.378636 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" event={"ID":"dbb72093-2c40-4a92-b1bf-18d8175fb1c8","Type":"ContainerStarted","Data":"ff4fc92e7c61ba227e91a5d2b50b9743c0992d6bd313d81aaa81aa4eee4a5d72"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.378687 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.379241 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.379392 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.381241 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" event={"ID":"7d34c65a-18e8-4709-856b-232ceae77630","Type":"ContainerStarted","Data":"565b0f9fe4d701e5c18d5c84a59be1d34b83dd7e65639b2ee4ceefe69c27bb36"}
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.381479 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl"
Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.397402
4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.405703 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.405750 4886 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a89738ee3e7bc021fddcc3ca5eea8eb3f8f9511340b9e6dc038bf72db46dd4af" exitCode=1 Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.405963 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a89738ee3e7bc021fddcc3ca5eea8eb3f8f9511340b9e6dc038bf72db46dd4af"} Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.406003 4886 scope.go:117] "RemoveContainer" containerID="4de23ed6c234b5fb7bba35f6dc5c2010c384b596de6d99ca048bf39896d154e3" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.408139 4886 scope.go:117] "RemoveContainer" containerID="a89738ee3e7bc021fddcc3ca5eea8eb3f8f9511340b9e6dc038bf72db46dd4af" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.415212 4886 generic.go:334] "Generic (PLEG): container finished" podID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerID="00e9543f388337de040f8037d1d90e02b98d02a17892a15941af030c83dea11f" exitCode=0 Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.415375 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" event={"ID":"adc7f4d1-4f6b-4a8a-843e-119a248a1e17","Type":"ContainerDied","Data":"00e9543f388337de040f8037d1d90e02b98d02a17892a15941af030c83dea11f"} Feb 19 
22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.418841 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" event={"ID":"0fd9b417-e431-4afc-b6a7-5c269fa04171","Type":"ContainerStarted","Data":"31010edbb88859c8744eb2774e05209b0c37d6ada87522266773baa260b6122e"} Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.419534 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.419721 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.419856 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.423044 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerID="d371f61a178e26c17bce16ae393bd4c85644e36e71fa8993630153e17f52375d" exitCode=0 Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.423093 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" event={"ID":"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5","Type":"ContainerDied","Data":"d371f61a178e26c17bce16ae393bd4c85644e36e71fa8993630153e17f52375d"} Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.439587 4886 generic.go:334] "Generic (PLEG): container 
finished" podID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerID="9916cade98533177c7c201fd250ae0e30e134818d9c29b9ecef93a8d3bd60ff5" exitCode=0 Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.439683 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" event={"ID":"762d12a4-6d88-4715-923e-916dfc4ecad3","Type":"ContainerDied","Data":"9916cade98533177c7c201fd250ae0e30e134818d9c29b9ecef93a8d3bd60ff5"} Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.439729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" event={"ID":"762d12a4-6d88-4715-923e-916dfc4ecad3","Type":"ContainerStarted","Data":"ad42449e5c0708a64f023b9fdac50723bc818e0870dac8880c934fc2e8733c05"} Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.440172 4886 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-l7kln container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.440207 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" podUID="762d12a4-6d88-4715-923e-916dfc4ecad3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.441111 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.441497 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c 
container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.441529 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.441636 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.441668 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 22:16:26 crc kubenswrapper[4886]: I0219 22:16:26.697149 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.100499 4886 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-x88rq" podUID="81d4d6dc-36bc-4a5a-b1dc-a1416e85e1d5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.350570 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.400009 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:27 crc kubenswrapper[4886]: timeout: health rpc did not complete within 1s Feb 19 22:16:27 crc kubenswrapper[4886]: > Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.400139 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.420375 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:27 crc kubenswrapper[4886]: timeout: health rpc did not complete within 1s Feb 19 22:16:27 crc kubenswrapper[4886]: > Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.420430 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.452312 4886 generic.go:334] "Generic (PLEG): container finished" podID="e51c0ebd-f319-41ea-9f7e-17bca0f30b6c" containerID="cfc8ef043b2682f8db9a2219482a3c2a98044d58de0bffd6072974fc701f655d" exitCode=1 Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.452382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" event={"ID":"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c","Type":"ContainerDied","Data":"cfc8ef043b2682f8db9a2219482a3c2a98044d58de0bffd6072974fc701f655d"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 
22:16:27.453471 4886 scope.go:117] "RemoveContainer" containerID="cfc8ef043b2682f8db9a2219482a3c2a98044d58de0bffd6072974fc701f655d" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.458151 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fvkbm" event={"ID":"3c83b560-7a5e-4c22-82dd-63e1a422e0d6","Type":"ContainerStarted","Data":"842cce6ebc8c123a79cc22ced01a2486cadbfb170c16d4ac6d482a2dea0fd067"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.462510 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.463566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fcff124ee16bbc4de9ebed7e59f5d302560e07e2cd2ed0e7e005909317463c8"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.466202 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" event={"ID":"adc7f4d1-4f6b-4a8a-843e-119a248a1e17","Type":"ContainerStarted","Data":"b99fb34f5950847f1e42b6cfc0e03345f57268db0507c7aace9ab9424cd2214c"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.466586 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.470411 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= 
Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.470509 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.472758 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" event={"ID":"3d51e481-ad0d-4d45-b0ee-7ce02b1c428d","Type":"ContainerStarted","Data":"dc5c2cddf24fe5fe653c6c72ee65c08d73ead3d9d9846dd896667241683d0f09"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.473087 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.475136 4886 generic.go:334] "Generic (PLEG): container finished" podID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerID="dd2646a7450d3125e062b69ad24dace478d7444cb86f6346d3180a1f34417c62" exitCode=0 Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.475197 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" event={"ID":"2e90185c-2cd5-48d5-9e61-43020c0e21de","Type":"ContainerDied","Data":"dd2646a7450d3125e062b69ad24dace478d7444cb86f6346d3180a1f34417c62"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.475218 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" event={"ID":"2e90185c-2cd5-48d5-9e61-43020c0e21de","Type":"ContainerStarted","Data":"d588196ae9a92ef7aa2318dcb41b446411951bcd4af600e6b5d03ad5732cee03"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 
22:16:27.475805 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.476053 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.476194 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.490314 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.494462 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" event={"ID":"dbea0765-f1be-4f22-a192-686a73112963","Type":"ContainerStarted","Data":"99c617a58bd0204c89901787370fa1523eee082918bd8ddc909c19e483aa328f"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.495546 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.504974 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" 
containerStatusID={"Type":"cri-o","ID":"85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5"} pod="openshift-marketplace/redhat-marketplace-xrdmp" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.505293 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" containerID="cri-o://85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5" gracePeriod=30 Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.506305 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" event={"ID":"aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5","Type":"ContainerStarted","Data":"80bf7b9159c49d3ddeaa803356e143e9564f4b1468cdab85103db103b0858e3d"} Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.506342 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.507528 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.507727 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.507753 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.514971 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.515033 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.515149 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.515171 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.515773 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: 
connect: connection refused" start-of-body= Feb 19 22:16:27 crc kubenswrapper[4886]: I0219 22:16:27.515846 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 22:16:28 crc kubenswrapper[4886]: E0219 22:16:28.160095 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe8c5521_0d01_4afe_9a3e_7ee2ba8d014c.slice/crio-85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe8c5521_0d01_4afe_9a3e_7ee2ba8d014c.slice/crio-conmon-85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5.scope\": RecentStats: unable to find data in memory cache]" Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.525111 4886 generic.go:334] "Generic (PLEG): container finished" podID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerID="85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5" exitCode=0 Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.525157 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerDied","Data":"85c333de9ec3f6dfb8dd674d1272ea6031bba34533b8dcbd3d09558ab2ba80b5"} Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.532360 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" 
event={"ID":"e51c0ebd-f319-41ea-9f7e-17bca0f30b6c","Type":"ContainerStarted","Data":"9d2828ab2225a60378dc0206686bc80db9a5dc4317d0b595a484e1b3cfa80146"} Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.532868 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.533036 4886 patch_prober.go:28] interesting pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.533179 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.533203 4886 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-txj4p container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.533399 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" podUID="adc7f4d1-4f6b-4a8a-843e-119a248a1e17" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 
22:16:28.533218 4886 patch_prober.go:28] interesting pod/route-controller-manager-7c7c79bc7d-qmtdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.533532 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" podUID="2e90185c-2cd5-48d5-9e61-43020c0e21de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.631129 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" containerID="cri-o://bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211" gracePeriod=24 Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.644238 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" containerID="cri-o://50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018" gracePeriod=23 Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.701442 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6b66dd58b-2rt7q" Feb 19 22:16:28 crc kubenswrapper[4886]: I0219 22:16:28.871977 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.544526 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="1e98987f-2584-4d4c-ae5e-7fd6bdb947d5" containerID="bd67cce4e6dbfdc1339dd425114735eac488c1569337e3c9697ec44a066cd4ee" exitCode=0 Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.544591 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerDied","Data":"bd67cce4e6dbfdc1339dd425114735eac488c1569337e3c9697ec44a066cd4ee"} Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.549576 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrdmp" event={"ID":"be8c5521-0d01-4afe-9a3e-7ee2ba8d014c","Type":"ContainerStarted","Data":"91a21538388210d75263b1b843d15a5aa28bf0b7868f4fdae72f2b7287d16e08"} Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.549762 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.567231 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.567299 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.567467 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-77m65 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 
10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.567517 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-77m65" podUID="5d6ed5aa-d2e5-4622-8506-4ef0502af8c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 22:16:29 crc kubenswrapper[4886]: E0219 22:16:29.636807 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 22:16:29 crc kubenswrapper[4886]: E0219 22:16:29.638093 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 22:16:29 crc kubenswrapper[4886]: E0219 22:16:29.641340 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 22:16:29 crc kubenswrapper[4886]: E0219 22:16:29.641376 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" 
podUID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerName="galera" Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.742958 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.743007 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.743079 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54js container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.743106 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" podUID="dbb72093-2c40-4a92-b1bf-18d8175fb1c8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.996221 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Feb 19 22:16:29 crc kubenswrapper[4886]: [+]has-synced ok Feb 19 22:16:29 crc kubenswrapper[4886]: [-]process-running failed: 
reason withheld Feb 19 22:16:29 crc kubenswrapper[4886]: healthz check failed Feb 19 22:16:29 crc kubenswrapper[4886]: I0219 22:16:29.996599 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.029742 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.029813 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.029766 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t8hr5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.029885 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" podUID="0fd9b417-e431-4afc-b6a7-5c269fa04171" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.044577 4886 patch_prober.go:28] interesting 
pod/controller-manager-677cd87946-f626n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.044637 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" podUID="aa9a3b27-05fc-44f1-9a62-6f4e6b0441f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.067750 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.067799 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.067842 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s659c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.067882 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" 
podUID="22d58380-a045-498b-aa0e-d07a603210ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.404340 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.567965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e98987f-2584-4d4c-ae5e-7fd6bdb947d5","Type":"ContainerStarted","Data":"89e782bae0657d5940082d18bbc73f2855f8acfa69bd4d6da239475a4b792502"} Feb 19 22:16:30 crc kubenswrapper[4886]: E0219 22:16:30.814621 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 22:16:30 crc kubenswrapper[4886]: E0219 22:16:30.816004 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 22:16:30 crc kubenswrapper[4886]: E0219 22:16:30.817449 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211" 
cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 22:16:30 crc kubenswrapper[4886]: E0219 22:16:30.817486 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" Feb 19 22:16:30 crc kubenswrapper[4886]: I0219 22:16:30.897595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-txj4p" Feb 19 22:16:31 crc kubenswrapper[4886]: I0219 22:16:31.299235 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-x88rq" Feb 19 22:16:31 crc kubenswrapper[4886]: I0219 22:16:31.581429 4886 generic.go:334] "Generic (PLEG): container finished" podID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerID="bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211" exitCode=0 Feb 19 22:16:31 crc kubenswrapper[4886]: I0219 22:16:31.581506 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c","Type":"ContainerDied","Data":"bcb786f467a824c7f4b6dd3308fde21e580488251a47a7f990bf6aec87333211"} Feb 19 22:16:31 crc kubenswrapper[4886]: I0219 22:16:31.591438 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-2wz7p_6b171410-3c97-4589-b62d-2190a13cbb3e/router/0.log" Feb 19 22:16:31 crc kubenswrapper[4886]: I0219 22:16:31.591499 4886 generic.go:334] "Generic (PLEG): container finished" podID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerID="77d5bb2375829a1ba73d0ec9c623fa93c14ac465ea228b73ecc32cedd015fe81" exitCode=137 Feb 19 22:16:31 crc kubenswrapper[4886]: I0219 22:16:31.591624 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ingress/router-default-5444994796-2wz7p" event={"ID":"6b171410-3c97-4589-b62d-2190a13cbb3e","Type":"ContainerDied","Data":"77d5bb2375829a1ba73d0ec9c623fa93c14ac465ea228b73ecc32cedd015fe81"} Feb 19 22:16:32 crc kubenswrapper[4886]: I0219 22:16:32.652785 4886 generic.go:334] "Generic (PLEG): container finished" podID="a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3" containerID="50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018" exitCode=0 Feb 19 22:16:32 crc kubenswrapper[4886]: I0219 22:16:32.653251 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3","Type":"ContainerDied","Data":"50a2f7f892456319182e9497ded428e0d67041d805774f65bc09a05b69d0f018"} Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.089965 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.094901 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-l59mx" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.247515 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-2cpst" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.679768 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-znmcl" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.691156 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"aeb6523b-7fed-4c9a-87c2-b531f22c9a1c","Type":"ContainerStarted","Data":"7ff77e609d7e10b0e70a60a6a8b2e98d9652127244f2dd4e6632bd48f2cd4570"} Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.696752 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-2wz7p_6b171410-3c97-4589-b62d-2190a13cbb3e/router/0.log" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.697105 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2wz7p" event={"ID":"6b171410-3c97-4589-b62d-2190a13cbb3e","Type":"ContainerStarted","Data":"8a5500dc8bbee797f58f72a937bdd6fb5c9f4c1b3c6bef8f55effa91e970ac4f"} Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.699686 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a80ab4bf-a8a5-46c1-8d1b-08e3bb253ae3","Type":"ContainerStarted","Data":"4e14a09f6b587bfa10680bfdd23942388da81bf652ae9f87a3c241202bc93586"} Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.938981 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7c79bc7d-qmtdf" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.952965 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.966226 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.982338 4886 patch_prober.go:28] interesting pod/router-default-5444994796-2wz7p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 22:16:33 crc kubenswrapper[4886]: [+]has-synced ok 
Feb 19 22:16:33 crc kubenswrapper[4886]: [+]process-running ok Feb 19 22:16:33 crc kubenswrapper[4886]: healthz check failed Feb 19 22:16:33 crc kubenswrapper[4886]: I0219 22:16:33.982399 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2wz7p" podUID="6b171410-3c97-4589-b62d-2190a13cbb3e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:34 crc kubenswrapper[4886]: I0219 22:16:34.836346 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fvkbm" Feb 19 22:16:34 crc kubenswrapper[4886]: I0219 22:16:34.836800 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fvkbm" Feb 19 22:16:34 crc kubenswrapper[4886]: I0219 22:16:34.957947 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 22:16:34 crc kubenswrapper[4886]: I0219 22:16:34.984672 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fvkbm" Feb 19 22:16:35 crc kubenswrapper[4886]: I0219 22:16:35.237019 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-b77f6dcd-4z22f" Feb 19 22:16:35 crc kubenswrapper[4886]: I0219 22:16:35.543494 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d96cd6488-wwn64" Feb 19 22:16:35 crc kubenswrapper[4886]: I0219 22:16:35.721835 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 22:16:35 crc kubenswrapper[4886]: I0219 22:16:35.725185 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-2wz7p" Feb 19 22:16:35 crc kubenswrapper[4886]: I0219 22:16:35.764655 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fvkbm" Feb 19 22:16:36 crc kubenswrapper[4886]: I0219 22:16:36.079498 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:36 crc kubenswrapper[4886]: I0219 22:16:36.079566 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 22:16:36 crc kubenswrapper[4886]: I0219 22:16:36.080853 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"d2b3990b6c91d3065337867bc53cc174c5b1d6fb9ea442e213cdf3039f807277"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 19 22:16:36 crc kubenswrapper[4886]: I0219 22:16:36.080921 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" containerID="cri-o://d2b3990b6c91d3065337867bc53cc174c5b1d6fb9ea442e213cdf3039f807277" gracePeriod=30 Feb 19 22:16:36 crc kubenswrapper[4886]: I0219 22:16:36.122941 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:36 crc kubenswrapper[4886]: I0219 22:16:36.123000 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:37 crc kubenswrapper[4886]: I0219 22:16:37.243595 4886 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-xrdmp" podUID="be8c5521-0d01-4afe-9a3e-7ee2ba8d014c" containerName="registry-server" probeResult="failure" output=< Feb 19 22:16:37 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:16:37 crc kubenswrapper[4886]: > Feb 19 22:16:37 crc kubenswrapper[4886]: I0219 22:16:37.350959 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:16:37 crc kubenswrapper[4886]: I0219 22:16:37.351225 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 22:16:37 crc kubenswrapper[4886]: I0219 22:16:37.351317 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 22:16:38 crc kubenswrapper[4886]: I0219 22:16:38.755889 4886 generic.go:334] "Generic (PLEG): container finished" podID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerID="d2b3990b6c91d3065337867bc53cc174c5b1d6fb9ea442e213cdf3039f807277" exitCode=0 Feb 19 22:16:38 crc kubenswrapper[4886]: I0219 22:16:38.755927 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerDied","Data":"d2b3990b6c91d3065337867bc53cc174c5b1d6fb9ea442e213cdf3039f807277"} Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.316755 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cz55v5" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.575807 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-77m65" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.635800 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.636097 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.748916 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54js" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.769702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerStarted","Data":"68264b2bf7685b48302e397e71c3b9e78dca4267ae23bcf50dc9a86e4536be0f"} Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.916332 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lh9xk"] Feb 19 22:16:39 crc kubenswrapper[4886]: E0219 22:16:39.927491 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8cf5b6-801b-42f8-814f-cac5311d2292" containerName="collect-profiles" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.927836 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8cf5b6-801b-42f8-814f-cac5311d2292" containerName="collect-profiles" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.928709 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8cf5b6-801b-42f8-814f-cac5311d2292" containerName="collect-profiles" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.933330 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:39 crc kubenswrapper[4886]: I0219 22:16:39.957105 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh9xk"] Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.021932 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-catalog-content\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.022038 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9j2p\" (UniqueName: \"kubernetes.io/projected/344485c9-a591-4ab9-80a0-bd456d422275-kube-api-access-f9j2p\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.022100 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-utilities\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.033703 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t8hr5" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.048133 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-677cd87946-f626n" Feb 19 22:16:40 crc kubenswrapper[4886]: 
I0219 22:16:40.072034 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s659c" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.124078 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-catalog-content\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.124230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9j2p\" (UniqueName: \"kubernetes.io/projected/344485c9-a591-4ab9-80a0-bd456d422275-kube-api-access-f9j2p\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.124327 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-utilities\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.125328 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-utilities\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.125412 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-catalog-content\") pod 
\"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.156917 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9j2p\" (UniqueName: \"kubernetes.io/projected/344485c9-a591-4ab9-80a0-bd456d422275-kube-api-access-f9j2p\") pod \"redhat-marketplace-lh9xk\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.259456 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.295743 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hq9np"] Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.298137 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.308673 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq9np"] Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.328372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cjk\" (UniqueName: \"kubernetes.io/projected/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-kube-api-access-s2cjk\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.329564 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-utilities\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.329733 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-catalog-content\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.433674 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cjk\" (UniqueName: \"kubernetes.io/projected/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-kube-api-access-s2cjk\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.433752 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-utilities\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.433813 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-catalog-content\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.434422 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-catalog-content\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.434501 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-utilities\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.453662 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cjk\" (UniqueName: \"kubernetes.io/projected/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-kube-api-access-s2cjk\") pod \"redhat-operators-hq9np\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.719814 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.813116 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 22:16:40 crc kubenswrapper[4886]: I0219 22:16:40.813458 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 22:16:41 crc kubenswrapper[4886]: I0219 22:16:41.069831 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 22:16:41 crc kubenswrapper[4886]: I0219 22:16:41.805236 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh9xk"] Feb 19 22:16:41 crc kubenswrapper[4886]: I0219 22:16:41.817078 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq9np"] Feb 19 22:16:41 crc kubenswrapper[4886]: W0219 22:16:41.819473 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod268ec8b6_20c9_426c_8ae3_6d912e1ebcdd.slice/crio-5aa79af6b521fbef50f017961684623929bb4ee4d6224a6c525e68ee7f36d55f WatchSource:0}: Error finding container 5aa79af6b521fbef50f017961684623929bb4ee4d6224a6c525e68ee7f36d55f: Status 404 returned error can't find the container with id 5aa79af6b521fbef50f017961684623929bb4ee4d6224a6c525e68ee7f36d55f Feb 19 22:16:42 crc kubenswrapper[4886]: I0219 22:16:42.805779 4886 generic.go:334] "Generic (PLEG): container finished" podID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerID="176d47a6340ce25733db20becaa78c81a853a5f60cfec9f52689f1feabd94a4d" exitCode=0 Feb 19 22:16:42 crc kubenswrapper[4886]: I0219 22:16:42.805881 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" 
event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerDied","Data":"176d47a6340ce25733db20becaa78c81a853a5f60cfec9f52689f1feabd94a4d"} Feb 19 22:16:42 crc kubenswrapper[4886]: I0219 22:16:42.806256 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerStarted","Data":"5aa79af6b521fbef50f017961684623929bb4ee4d6224a6c525e68ee7f36d55f"} Feb 19 22:16:42 crc kubenswrapper[4886]: I0219 22:16:42.809601 4886 generic.go:334] "Generic (PLEG): container finished" podID="344485c9-a591-4ab9-80a0-bd456d422275" containerID="ab18c2f7501cb21889326e08adfc6fd9402f5542f6b1e31066e90e689b5ee150" exitCode=0 Feb 19 22:16:42 crc kubenswrapper[4886]: I0219 22:16:42.809639 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerDied","Data":"ab18c2f7501cb21889326e08adfc6fd9402f5542f6b1e31066e90e689b5ee150"} Feb 19 22:16:42 crc kubenswrapper[4886]: I0219 22:16:42.809665 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerStarted","Data":"43e8fc19f533f620c6b23e88859343e4e560aea75caf03dca97fc8a54a4aeea8"} Feb 19 22:16:43 crc kubenswrapper[4886]: I0219 22:16:43.825174 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerStarted","Data":"0986e65ee55159f6e8ac04ef269377d032dfea060241e1c7155b55994bbdd805"} Feb 19 22:16:43 crc kubenswrapper[4886]: I0219 22:16:43.827441 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" 
event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerStarted","Data":"7aa48f64056a2a136058d71a0b4dcf43d408023f5d77784b263e4dc1ea3347fc"} Feb 19 22:16:44 crc kubenswrapper[4886]: I0219 22:16:44.839146 4886 generic.go:334] "Generic (PLEG): container finished" podID="344485c9-a591-4ab9-80a0-bd456d422275" containerID="7aa48f64056a2a136058d71a0b4dcf43d408023f5d77784b263e4dc1ea3347fc" exitCode=0 Feb 19 22:16:44 crc kubenswrapper[4886]: I0219 22:16:44.840820 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerDied","Data":"7aa48f64056a2a136058d71a0b4dcf43d408023f5d77784b263e4dc1ea3347fc"} Feb 19 22:16:45 crc kubenswrapper[4886]: I0219 22:16:45.853980 4886 generic.go:334] "Generic (PLEG): container finished" podID="e5a46475-fb7c-41ff-ba13-98139467fd86" containerID="60cd21aa06a4a03dea01de52976a4511f4863d11c5c53499deedf31afa59cf3b" exitCode=1 Feb 19 22:16:45 crc kubenswrapper[4886]: I0219 22:16:45.854294 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a46475-fb7c-41ff-ba13-98139467fd86","Type":"ContainerDied","Data":"60cd21aa06a4a03dea01de52976a4511f4863d11c5c53499deedf31afa59cf3b"} Feb 19 22:16:45 crc kubenswrapper[4886]: I0219 22:16:45.858035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerStarted","Data":"b78ca39eece552cdbfebd8d2651f43a994719a69fce62c34247de5f90b96274a"} Feb 19 22:16:45 crc kubenswrapper[4886]: I0219 22:16:45.903550 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lh9xk" podStartSLOduration=4.486778895 podStartE2EDuration="6.902236056s" podCreationTimestamp="2026-02-19 22:16:39 +0000 UTC" firstStartedPulling="2026-02-19 22:16:42.811144025 +0000 UTC 
m=+4633.438987075" lastFinishedPulling="2026-02-19 22:16:45.226601186 +0000 UTC m=+4635.854444236" observedRunningTime="2026-02-19 22:16:45.897195321 +0000 UTC m=+4636.525038371" watchObservedRunningTime="2026-02-19 22:16:45.902236056 +0000 UTC m=+4636.530079096" Feb 19 22:16:46 crc kubenswrapper[4886]: I0219 22:16:46.081326 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:46 crc kubenswrapper[4886]: I0219 22:16:46.657683 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:46 crc kubenswrapper[4886]: I0219 22:16:46.715641 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrdmp" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.096889 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" containerID="cri-o://90893d6f3ab89232e436d6eb9a88fa498ca207e72b0039fa3f7c39d14f34d981" gracePeriod=13 Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.351302 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.351362 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.639678 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802378 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802456 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ca-certs\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802531 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ssh-key\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802555 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config-secret\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802756 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/e5a46475-fb7c-41ff-ba13-98139467fd86-kube-api-access-x8j4w\") pod 
\"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802831 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-workdir\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.802893 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.803013 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-temporary\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.803075 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-config-data\") pod \"e5a46475-fb7c-41ff-ba13-98139467fd86\" (UID: \"e5a46475-fb7c-41ff-ba13-98139467fd86\") " Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.829817 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.840326 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.846047 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-config-data" (OuterVolumeSpecName: "config-data") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.848865 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a46475-fb7c-41ff-ba13-98139467fd86-kube-api-access-x8j4w" (OuterVolumeSpecName: "kube-api-access-x8j4w") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "kube-api-access-x8j4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.853118 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.863335 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.882246 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.888102 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.894943 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a46475-fb7c-41ff-ba13-98139467fd86","Type":"ContainerDied","Data":"7d0b7a040fd842496ce8718b67f114d63cb8ed82ba1704595d7a17a7109b56c2"} Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.895101 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0b7a040fd842496ce8718b67f114d63cb8ed82ba1704595d7a17a7109b56c2" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.895364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e5a46475-fb7c-41ff-ba13-98139467fd86" (UID: "e5a46475-fb7c-41ff-ba13-98139467fd86"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.903449 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.905823 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.905847 4886 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.905857 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.905867 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a46475-fb7c-41ff-ba13-98139467fd86-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.905876 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/e5a46475-fb7c-41ff-ba13-98139467fd86-kube-api-access-x8j4w\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.905886 4886 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.947571 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 22:16:47 crc kubenswrapper[4886]: 
I0219 22:16:47.947615 4886 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a46475-fb7c-41ff-ba13-98139467fd86-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.947636 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a46475-fb7c-41ff-ba13-98139467fd86-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:47 crc kubenswrapper[4886]: I0219 22:16:47.980422 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 22:16:48 crc kubenswrapper[4886]: I0219 22:16:48.049451 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:48 crc kubenswrapper[4886]: I0219 22:16:48.868054 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Feb 19 22:16:48 crc kubenswrapper[4886]: I0219 22:16:48.868409 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Feb 19 22:16:48 crc kubenswrapper[4886]: I0219 22:16:48.907461 4886 generic.go:334] "Generic (PLEG): container finished" podID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerID="90893d6f3ab89232e436d6eb9a88fa498ca207e72b0039fa3f7c39d14f34d981" exitCode=0 
Feb 19 22:16:48 crc kubenswrapper[4886]: I0219 22:16:48.907507 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" event={"ID":"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6","Type":"ContainerDied","Data":"90893d6f3ab89232e436d6eb9a88fa498ca207e72b0039fa3f7c39d14f34d981"} Feb 19 22:16:49 crc kubenswrapper[4886]: I0219 22:16:49.919558 4886 generic.go:334] "Generic (PLEG): container finished" podID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerID="0986e65ee55159f6e8ac04ef269377d032dfea060241e1c7155b55994bbdd805" exitCode=0 Feb 19 22:16:49 crc kubenswrapper[4886]: I0219 22:16:49.919731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerDied","Data":"0986e65ee55159f6e8ac04ef269377d032dfea060241e1c7155b55994bbdd805"} Feb 19 22:16:49 crc kubenswrapper[4886]: I0219 22:16:49.927382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" event={"ID":"1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6","Type":"ContainerStarted","Data":"5f87da6cf15180159635fd598bb1348a09c3cdf528ac8277a3cc51f93bd3bb2b"} Feb 19 22:16:49 crc kubenswrapper[4886]: I0219 22:16:49.927964 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 22:16:49 crc kubenswrapper[4886]: I0219 22:16:49.928032 4886 patch_prober.go:28] interesting pod/oauth-openshift-79b5c48459-6tqsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Feb 19 22:16:49 crc kubenswrapper[4886]: I0219 22:16:49.928058 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" 
podUID="1d6a95a3-ba47-4bb6-9b69-7cbbe6acc5b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Feb 19 22:16:50 crc kubenswrapper[4886]: I0219 22:16:50.260481 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:50 crc kubenswrapper[4886]: I0219 22:16:50.260553 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:50 crc kubenswrapper[4886]: I0219 22:16:50.329576 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:50 crc kubenswrapper[4886]: I0219 22:16:50.943166 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerStarted","Data":"8202f07f9c2b77b25addfc3dcc7210f5652b88c41e01220302e72560c43411f0"} Feb 19 22:16:50 crc kubenswrapper[4886]: I0219 22:16:50.960233 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79b5c48459-6tqsd" Feb 19 22:16:50 crc kubenswrapper[4886]: I0219 22:16:50.965740 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hq9np" podStartSLOduration=3.332153278 podStartE2EDuration="10.965724095s" podCreationTimestamp="2026-02-19 22:16:40 +0000 UTC" firstStartedPulling="2026-02-19 22:16:42.807953726 +0000 UTC m=+4633.435796776" lastFinishedPulling="2026-02-19 22:16:50.441524543 +0000 UTC m=+4641.069367593" observedRunningTime="2026-02-19 22:16:50.961738056 +0000 UTC m=+4641.589581106" watchObservedRunningTime="2026-02-19 22:16:50.965724095 +0000 UTC m=+4641.593567145" Feb 19 22:16:51 crc kubenswrapper[4886]: I0219 22:16:51.003669 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:51 crc kubenswrapper[4886]: I0219 22:16:51.073846 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:54 crc kubenswrapper[4886]: I0219 22:16:54.276062 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh9xk"] Feb 19 22:16:54 crc kubenswrapper[4886]: I0219 22:16:54.276798 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lh9xk" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="registry-server" containerID="cri-o://b78ca39eece552cdbfebd8d2651f43a994719a69fce62c34247de5f90b96274a" gracePeriod=2 Feb 19 22:16:54 crc kubenswrapper[4886]: I0219 22:16:54.983115 4886 generic.go:334] "Generic (PLEG): container finished" podID="344485c9-a591-4ab9-80a0-bd456d422275" containerID="b78ca39eece552cdbfebd8d2651f43a994719a69fce62c34247de5f90b96274a" exitCode=0 Feb 19 22:16:54 crc kubenswrapper[4886]: I0219 22:16:54.983182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerDied","Data":"b78ca39eece552cdbfebd8d2651f43a994719a69fce62c34247de5f90b96274a"} Feb 19 22:16:54 crc kubenswrapper[4886]: I0219 22:16:54.983695 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh9xk" event={"ID":"344485c9-a591-4ab9-80a0-bd456d422275","Type":"ContainerDied","Data":"43e8fc19f533f620c6b23e88859343e4e560aea75caf03dca97fc8a54a4aeea8"} Feb 19 22:16:54 crc kubenswrapper[4886]: I0219 22:16:54.983709 4886 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="43e8fc19f533f620c6b23e88859343e4e560aea75caf03dca97fc8a54a4aeea8" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.085566 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.122427 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-utilities\") pod \"344485c9-a591-4ab9-80a0-bd456d422275\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.122546 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-catalog-content\") pod \"344485c9-a591-4ab9-80a0-bd456d422275\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.122844 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9j2p\" (UniqueName: \"kubernetes.io/projected/344485c9-a591-4ab9-80a0-bd456d422275-kube-api-access-f9j2p\") pod \"344485c9-a591-4ab9-80a0-bd456d422275\" (UID: \"344485c9-a591-4ab9-80a0-bd456d422275\") " Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.123377 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-utilities" (OuterVolumeSpecName: "utilities") pod "344485c9-a591-4ab9-80a0-bd456d422275" (UID: "344485c9-a591-4ab9-80a0-bd456d422275"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.140326 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.141361 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/344485c9-a591-4ab9-80a0-bd456d422275-kube-api-access-f9j2p" (OuterVolumeSpecName: "kube-api-access-f9j2p") pod "344485c9-a591-4ab9-80a0-bd456d422275" (UID: "344485c9-a591-4ab9-80a0-bd456d422275"). InnerVolumeSpecName "kube-api-access-f9j2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.149441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "344485c9-a591-4ab9-80a0-bd456d422275" (UID: "344485c9-a591-4ab9-80a0-bd456d422275"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.242723 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344485c9-a591-4ab9-80a0-bd456d422275-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.242766 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9j2p\" (UniqueName: \"kubernetes.io/projected/344485c9-a591-4ab9-80a0-bd456d422275-kube-api-access-f9j2p\") on node \"crc\" DevicePath \"\"" Feb 19 22:16:55 crc kubenswrapper[4886]: I0219 22:16:55.993871 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh9xk" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.027587 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh9xk"] Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.038684 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh9xk"] Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.070768 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.361253 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 22:16:56 crc kubenswrapper[4886]: E0219 22:16:56.361987 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="extract-utilities" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.362068 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="extract-utilities" Feb 19 22:16:56 crc kubenswrapper[4886]: E0219 22:16:56.362134 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="registry-server" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.362185 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="registry-server" Feb 19 22:16:56 crc kubenswrapper[4886]: E0219 22:16:56.362306 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a46475-fb7c-41ff-ba13-98139467fd86" containerName="tempest-tests-tempest-tests-runner" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.362377 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a46475-fb7c-41ff-ba13-98139467fd86" containerName="tempest-tests-tempest-tests-runner" Feb 19 22:16:56 crc kubenswrapper[4886]: E0219 22:16:56.362463 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="extract-content" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.362528 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="extract-content" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.362750 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="344485c9-a591-4ab9-80a0-bd456d422275" containerName="registry-server" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.362768 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a46475-fb7c-41ff-ba13-98139467fd86" containerName="tempest-tests-tempest-tests-runner" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.363767 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.369254 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pdgpv" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.372477 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.473594 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.473722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hs5g\" (UniqueName: \"kubernetes.io/projected/16b918db-fb1f-43b1-8cca-7200afc2ca6d-kube-api-access-4hs5g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.575913 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hs5g\" (UniqueName: \"kubernetes.io/projected/16b918db-fb1f-43b1-8cca-7200afc2ca6d-kube-api-access-4hs5g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.576397 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.576862 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.597605 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hs5g\" (UniqueName: \"kubernetes.io/projected/16b918db-fb1f-43b1-8cca-7200afc2ca6d-kube-api-access-4hs5g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.619550 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="344485c9-a591-4ab9-80a0-bd456d422275" path="/var/lib/kubelet/pods/344485c9-a591-4ab9-80a0-bd456d422275/volumes" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.625534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"16b918db-fb1f-43b1-8cca-7200afc2ca6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:56 crc kubenswrapper[4886]: I0219 22:16:56.685287 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 22:16:57 crc kubenswrapper[4886]: I0219 22:16:57.152219 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 22:16:57 crc kubenswrapper[4886]: W0219 22:16:57.154068 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16b918db_fb1f_43b1_8cca_7200afc2ca6d.slice/crio-6e7416f3df3a647691d45bc3348e275adf85095403a76e73f3de93241fb6f688 WatchSource:0}: Error finding container 6e7416f3df3a647691d45bc3348e275adf85095403a76e73f3de93241fb6f688: Status 404 returned error can't find the container with id 6e7416f3df3a647691d45bc3348e275adf85095403a76e73f3de93241fb6f688 Feb 19 22:16:57 crc kubenswrapper[4886]: I0219 22:16:57.350730 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 22:16:57 crc kubenswrapper[4886]: I0219 22:16:57.351057 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 22:16:57 crc kubenswrapper[4886]: I0219 22:16:57.351104 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:16:57 crc kubenswrapper[4886]: I0219 22:16:57.352021 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" 
containerStatusID={"Type":"cri-o","ID":"5fcff124ee16bbc4de9ebed7e59f5d302560e07e2cd2ed0e7e005909317463c8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 19 22:16:57 crc kubenswrapper[4886]: I0219 22:16:57.352142 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://5fcff124ee16bbc4de9ebed7e59f5d302560e07e2cd2ed0e7e005909317463c8" gracePeriod=30 Feb 19 22:16:58 crc kubenswrapper[4886]: I0219 22:16:58.027213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"16b918db-fb1f-43b1-8cca-7200afc2ca6d","Type":"ContainerStarted","Data":"6e7416f3df3a647691d45bc3348e275adf85095403a76e73f3de93241fb6f688"} Feb 19 22:16:59 crc kubenswrapper[4886]: I0219 22:16:59.039164 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"16b918db-fb1f-43b1-8cca-7200afc2ca6d","Type":"ContainerStarted","Data":"5a9daf96dc787e51f48d84e85d4b5c31c7dec0c672ca271e306ffa82e24e15db"} Feb 19 22:16:59 crc kubenswrapper[4886]: I0219 22:16:59.062237 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.916834814 podStartE2EDuration="3.062212517s" podCreationTimestamp="2026-02-19 22:16:56 +0000 UTC" firstStartedPulling="2026-02-19 22:16:57.157675428 +0000 UTC m=+4647.785518478" lastFinishedPulling="2026-02-19 22:16:58.303053131 +0000 UTC m=+4648.930896181" observedRunningTime="2026-02-19 22:16:59.051500722 +0000 UTC m=+4649.679343772" watchObservedRunningTime="2026-02-19 22:16:59.062212517 +0000 UTC m=+4649.690055577" Feb 19 
22:16:59 crc kubenswrapper[4886]: I0219 22:16:59.705692 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l7kln" Feb 19 22:17:00 crc kubenswrapper[4886]: I0219 22:17:00.721361 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:17:00 crc kubenswrapper[4886]: I0219 22:17:00.721900 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:17:01 crc kubenswrapper[4886]: I0219 22:17:01.076054 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:01 crc kubenswrapper[4886]: I0219 22:17:01.775464 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq9np" podUID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerName="registry-server" probeResult="failure" output=< Feb 19 22:17:01 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:17:01 crc kubenswrapper[4886]: > Feb 19 22:17:05 crc kubenswrapper[4886]: I0219 22:17:05.072494 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75f8dcd7db-4522m" Feb 19 22:17:06 crc kubenswrapper[4886]: I0219 22:17:06.077012 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:11 crc kubenswrapper[4886]: I0219 22:17:11.073885 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" 
podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:11 crc kubenswrapper[4886]: I0219 22:17:11.799691 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq9np" podUID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerName="registry-server" probeResult="failure" output=< Feb 19 22:17:11 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:17:11 crc kubenswrapper[4886]: > Feb 19 22:17:16 crc kubenswrapper[4886]: I0219 22:17:16.071182 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:21 crc kubenswrapper[4886]: I0219 22:17:21.075762 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:22 crc kubenswrapper[4886]: I0219 22:17:22.540178 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq9np" podUID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerName="registry-server" probeResult="failure" output=< Feb 19 22:17:22 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Feb 19 22:17:22 crc kubenswrapper[4886]: > Feb 19 22:17:26 crc kubenswrapper[4886]: I0219 22:17:26.085356 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:28 crc kubenswrapper[4886]: I0219 22:17:28.391494 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 19 22:17:28 crc kubenswrapper[4886]: I0219 22:17:28.395250 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 22:17:28 crc kubenswrapper[4886]: I0219 22:17:28.396635 4886 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5fcff124ee16bbc4de9ebed7e59f5d302560e07e2cd2ed0e7e005909317463c8" exitCode=137 Feb 19 22:17:28 crc kubenswrapper[4886]: I0219 22:17:28.396674 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5fcff124ee16bbc4de9ebed7e59f5d302560e07e2cd2ed0e7e005909317463c8"} Feb 19 22:17:28 crc kubenswrapper[4886]: I0219 22:17:28.396698 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd9792adf29f2c4d3dbfb71294011cecf284f4d91d9973971251bcab8095b1e0"} Feb 19 22:17:28 crc kubenswrapper[4886]: I0219 22:17:28.396714 4886 scope.go:117] "RemoveContainer" containerID="a89738ee3e7bc021fddcc3ca5eea8eb3f8f9511340b9e6dc038bf72db46dd4af" Feb 19 22:17:29 crc kubenswrapper[4886]: I0219 22:17:29.407902 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 19 22:17:31 crc kubenswrapper[4886]: I0219 22:17:31.072760 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:31 crc kubenswrapper[4886]: I0219 22:17:31.676108 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:17:31 crc kubenswrapper[4886]: I0219 22:17:31.743597 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:17:31 crc kubenswrapper[4886]: I0219 22:17:31.932222 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq9np"] Feb 19 22:17:33 crc kubenswrapper[4886]: I0219 22:17:33.455734 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hq9np" podUID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerName="registry-server" containerID="cri-o://8202f07f9c2b77b25addfc3dcc7210f5652b88c41e01220302e72560c43411f0" gracePeriod=2 Feb 19 22:17:33 crc kubenswrapper[4886]: I0219 22:17:33.965746 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:17:34 crc kubenswrapper[4886]: I0219 22:17:34.476356 4886 generic.go:334] "Generic (PLEG): container finished" podID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" containerID="8202f07f9c2b77b25addfc3dcc7210f5652b88c41e01220302e72560c43411f0" exitCode=0 Feb 19 22:17:34 crc kubenswrapper[4886]: I0219 22:17:34.476627 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerDied","Data":"8202f07f9c2b77b25addfc3dcc7210f5652b88c41e01220302e72560c43411f0"} Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.001676 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.148007 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-catalog-content\") pod \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.148393 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cjk\" (UniqueName: \"kubernetes.io/projected/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-kube-api-access-s2cjk\") pod \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.148641 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-utilities\") pod \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\" (UID: \"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd\") " Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.149240 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-utilities" (OuterVolumeSpecName: "utilities") pod "268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" (UID: "268ec8b6-20c9-426c-8ae3-6d912e1ebcdd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.150002 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.167725 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-kube-api-access-s2cjk" (OuterVolumeSpecName: "kube-api-access-s2cjk") pod "268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" (UID: "268ec8b6-20c9-426c-8ae3-6d912e1ebcdd"). InnerVolumeSpecName "kube-api-access-s2cjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.253062 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cjk\" (UniqueName: \"kubernetes.io/projected/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-kube-api-access-s2cjk\") on node \"crc\" DevicePath \"\"" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.298984 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" (UID: "268ec8b6-20c9-426c-8ae3-6d912e1ebcdd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.355848 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.495159 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9np" event={"ID":"268ec8b6-20c9-426c-8ae3-6d912e1ebcdd","Type":"ContainerDied","Data":"5aa79af6b521fbef50f017961684623929bb4ee4d6224a6c525e68ee7f36d55f"} Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.495246 4886 scope.go:117] "RemoveContainer" containerID="8202f07f9c2b77b25addfc3dcc7210f5652b88c41e01220302e72560c43411f0" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.495249 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9np" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.528696 4886 scope.go:117] "RemoveContainer" containerID="0986e65ee55159f6e8ac04ef269377d032dfea060241e1c7155b55994bbdd805" Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.563089 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq9np"] Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.578609 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hq9np"] Feb 19 22:17:35 crc kubenswrapper[4886]: I0219 22:17:35.589122 4886 scope.go:117] "RemoveContainer" containerID="176d47a6340ce25733db20becaa78c81a853a5f60cfec9f52689f1feabd94a4d" Feb 19 22:17:36 crc kubenswrapper[4886]: I0219 22:17:36.092507 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Feb 19 22:17:36 crc kubenswrapper[4886]: I0219 22:17:36.621056 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268ec8b6-20c9-426c-8ae3-6d912e1ebcdd" path="/var/lib/kubelet/pods/268ec8b6-20c9-426c-8ae3-6d912e1ebcdd/volumes" Feb 19 22:17:37 crc kubenswrapper[4886]: I0219 22:17:37.350981 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:17:37 crc kubenswrapper[4886]: I0219 22:17:37.355662 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:17:41 crc kubenswrapper[4886]: I0219 22:17:41.081967 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 22:17:41 crc kubenswrapper[4886]: I0219 22:17:41.082045 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 22:17:41 crc kubenswrapper[4886]: I0219 22:17:41.083340 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"68264b2bf7685b48302e397e71c3b9e78dca4267ae23bcf50dc9a86e4536be0f"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted" Feb 19 22:17:41 crc kubenswrapper[4886]: I0219 22:17:41.083391 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerName="cinder-scheduler" containerID="cri-o://68264b2bf7685b48302e397e71c3b9e78dca4267ae23bcf50dc9a86e4536be0f" gracePeriod=30 Feb 19 22:17:43 crc kubenswrapper[4886]: I0219 22:17:43.973334 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 22:17:51 crc kubenswrapper[4886]: I0219 22:17:51.110937 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 22:17:51 crc kubenswrapper[4886]: I0219 22:17:51.260802 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 22:17:52 crc kubenswrapper[4886]: I0219 22:17:52.437937 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 22:17:52 crc kubenswrapper[4886]: I0219 22:17:52.563838 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 22:18:12 crc kubenswrapper[4886]: I0219 22:18:12.008522 4886 generic.go:334] "Generic (PLEG): container finished" podID="6c1275e6-3c64-49dc-9aa2-308cda6e4772" containerID="68264b2bf7685b48302e397e71c3b9e78dca4267ae23bcf50dc9a86e4536be0f" exitCode=137 Feb 19 22:18:12 crc kubenswrapper[4886]: I0219 22:18:12.009093 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerDied","Data":"68264b2bf7685b48302e397e71c3b9e78dca4267ae23bcf50dc9a86e4536be0f"} Feb 19 22:18:12 crc kubenswrapper[4886]: I0219 22:18:12.009135 4886 scope.go:117] "RemoveContainer" containerID="d2b3990b6c91d3065337867bc53cc174c5b1d6fb9ea442e213cdf3039f807277" Feb 19 22:18:13 crc kubenswrapper[4886]: I0219 22:18:13.022818 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6c1275e6-3c64-49dc-9aa2-308cda6e4772","Type":"ContainerStarted","Data":"90dff3343b34c906da56b92945123918f1cc630b1b8194e597e7b1fb496489ff"} Feb 19 22:18:16 crc kubenswrapper[4886]: I0219 22:18:16.055561 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Feb 19 22:18:18 crc kubenswrapper[4886]: I0219 22:18:18.324210 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:18:18 crc kubenswrapper[4886]: I0219 22:18:18.324685 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:18:21 crc kubenswrapper[4886]: I0219 22:18:21.084588 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 22:18:31 crc kubenswrapper[4886]: I0219 22:18:31.793076 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:18:31 crc kubenswrapper[4886]: I0219 22:18:31.794105 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="aeb6523b-7fed-4c9a-87c2-b531f22c9a1c" containerName="galera" probeResult="failure" output="command timed out" Feb 19 22:18:48 crc kubenswrapper[4886]: I0219 22:18:48.324353 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:18:48 crc kubenswrapper[4886]: I0219 22:18:48.324992 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:18:49 crc kubenswrapper[4886]: I0219 22:18:49.516917 4886 generic.go:334] "Generic (PLEG): container finished" podID="a55e7bbd-33a0-46c7-b08b-bf71421bd1bf" containerID="fc38f9b1ac58fcaea104c158b4f031beac2385497d750e96b62357f29fd370f0" exitCode=0 Feb 19 22:18:49 crc kubenswrapper[4886]: I0219 22:18:49.517020 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" event={"ID":"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf","Type":"ContainerDied","Data":"fc38f9b1ac58fcaea104c158b4f031beac2385497d750e96b62357f29fd370f0"} Feb 19 22:18:50 crc kubenswrapper[4886]: I0219 22:18:50.530337 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" event={"ID":"a55e7bbd-33a0-46c7-b08b-bf71421bd1bf","Type":"ContainerStarted","Data":"567b2ae5f5fa57162fc70f1da6cc2de4c49ca809abd49331e6416d1ca0d3d260"} Feb 19 22:19:08 crc kubenswrapper[4886]: I0219 22:19:08.098999 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 22:19:08 crc kubenswrapper[4886]: I0219 22:19:08.099625 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.324927 4886 patch_prober.go:28] interesting pod/machine-config-daemon-6stm5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.325668 4886 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.325760 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.327230 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f32c6f0dab5e6f6620d1e0dc38f6bc8583e537892ba22df70a1b0dc20ef3b5e"} pod="openshift-machine-config-operator/machine-config-daemon-6stm5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.327364 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" podUID="b096c32d-4192-4529-bc55-b05d09004007" containerName="machine-config-daemon" containerID="cri-o://3f32c6f0dab5e6f6620d1e0dc38f6bc8583e537892ba22df70a1b0dc20ef3b5e" gracePeriod=600 Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.898961 4886 generic.go:334] "Generic (PLEG): container finished" podID="b096c32d-4192-4529-bc55-b05d09004007" containerID="3f32c6f0dab5e6f6620d1e0dc38f6bc8583e537892ba22df70a1b0dc20ef3b5e" exitCode=0 Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.899027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerDied","Data":"3f32c6f0dab5e6f6620d1e0dc38f6bc8583e537892ba22df70a1b0dc20ef3b5e"} Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 
22:19:18.899650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6stm5" event={"ID":"b096c32d-4192-4529-bc55-b05d09004007","Type":"ContainerStarted","Data":"7cebfcf2f772157a713dd5b316dbfc5b11926131eef2a56f243e9096d5ab44cc"} Feb 19 22:19:18 crc kubenswrapper[4886]: I0219 22:19:18.899671 4886 scope.go:117] "RemoveContainer" containerID="7f2a697a6424d63a1b4da6b23f76491e34e675652e1bb658f31831a264d276c5" Feb 19 22:19:28 crc kubenswrapper[4886]: I0219 22:19:28.104374 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6" Feb 19 22:19:28 crc kubenswrapper[4886]: I0219 22:19:28.109742 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-799cc74bc-wv5f6"